Adding Image Editor in Phimpme Android app

Image editing is a core feature of the Phimpme Android application. We use an already existing component, the Image Editor, for this. Currently the app flow is Gallery → Open image. We now add an option in the top menu named “Edit” which redirects to the edit image activity with the path of that image. The user performs the edit operations there and applies the changes, and finally the album is updated with the new file.

The task is to first pass the image path to the EditImageActivity and then save the edited image in a different folder once the user finishes editing.

Below is the high-level architecture of how this is achieved with the help of the Image Editor.

Steps to integrate Editor:

  • Get the selected Image URI

Firstly, add the permission in the manifest file:

<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />

Get the local images from internal storage or the SD card using the MediaStore content provider:

Uri uri = MediaStore.Images.Media.EXTERNAL_CONTENT_URI;
String[] projection = { MediaStore.Images.Media._ID,
        MediaStore.Images.Media.BUCKET_ID,
        MediaStore.Images.Media.BUCKET_DISPLAY_NAME };
Cursor cursor = getContentResolver().query(uri, projection, null, null, null);

After that, you can iterate the cursor to the end and read the details.
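For example, a minimal sketch of iterating the cursor to collect image URIs might look like this (the column names follow the projection above):

if (cursor != null) {
    int idColumn = cursor.getColumnIndex(MediaStore.Images.Media._ID);
    int bucketColumn = cursor.getColumnIndex(MediaStore.Images.Media.BUCKET_DISPLAY_NAME);
    while (cursor.moveToNext()) {
        long id = cursor.getLong(idColumn);
        String bucketName = cursor.getString(bucketColumn);
        // Build a content URI for this image from its ID
        Uri imageUri = ContentUris.withAppendedId(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, id);
    }
    cursor.close();
}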

  • Create a separate folder where edited images will be saved.

Now, after editing, we have to place the edited image into a separate folder with a new name. You can create the folder like this:

File aviaryFolder = new File(baseDir, FOLDER_NAME);

Generate a new file which will be saved as the edited image:

public static File getEmptyFile(String name) {
    File folder = FileUtils.createFolders();
    if (folder != null && folder.exists()) {
        return new File(folder, name);
    }
    return null;
}

FileUtils.getEmptyFile("edited" + System.currentTimeMillis() + ".png");

 

It will save the new image named edited<timeinmillis>.png, so the name will be unique.

  • startActivityForResult with the image URI and the edit activity

So firstly, when the user opens an image and clicks on edit, we create an empty file using the above method, get the image URI, and pass it into the EditImageActivity. Along with the URI, we also pass the output file path.

Then call startActivityForResult with the intent and a request code, as sketched below.
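As a rough sketch (the extra keys and the request code constant are illustrative, not necessarily the exact names used in Phimpme):

File outputFile = FileUtils.getEmptyFile("edited" + System.currentTimeMillis() + ".png");

Intent intent = new Intent(this, EditImageActivity.class);
intent.putExtra("file_path", imagePath);                       // source image path (hypothetical key)
intent.putExtra("extra_output", outputFile.getAbsolutePath()); // where to save the result (hypothetical key)
startActivityForResult(intent, REQUEST_CODE_EDIT_IMAGE);       // REQUEST_CODE_EDIT_IMAGE: any constant int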

You can track whether the user has edited anything and only save if some edit was done; in Phimpme we keep a counter which is incremented on every change of the Bitmap.

Update the album by inserting the image into the MediaStore:

ContentValues values = new ContentValues(2);
String extensionName = getExtensionName(dstPath);
values.put(MediaStore.Images.Media.MIME_TYPE,
        "image/" + (TextUtils.isEmpty(extensionName) ? "jpeg" : extensionName));
values.put(MediaStore.Images.Media.DATA, dstPath);
context.getContentResolver().insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values);

Please visit the whole code here: github.com/fossasia/phimpme-android


Curve-Fitting in the PSLab Android App

One of the key features of PSLab is the Oscilloscope. An oscilloscope allows observation of temporal variations in electrical signals. Its main purpose is to record the input voltage level at highly precise intervals and display the acquired data as a plot. This conveys information about the signal such as the amplitude of fluctuations, periodicity, and the level of noise in the signal. Since the Oscilloscope helps us observe the varying waveforms of electronic signals, it measures a series of data points that need to be plotted on a graph of the instantaneous signal voltage as a function of time.

When periodic signals such as sine waves or square waves are read by the Oscilloscope, curve fitting functions are used to construct a curve that has the best fit to the series of data points. Curve fitting is also used on data points generated by sensors; for example, a damped sine fit is used to study the damping of simple pendulums. The curve fitting functions are already written in Python using libraries like numpy and scipy: analyticsClass.py provides almost all the curve fitting functions used in PSLab. For the Android implementation, we need to provide the same functionality in Java.

More about Curve-fitting

Technically speaking, curve-fitting is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points, possibly subject to constraints.

Let’s understand it with an example.

Exponential Fit

The dots in the above image represent data points and the line represents the best curve fit.

In the image, data points have been plotted on the graph. An exponential fit to the given series of data can be used as an aid for data visualization. There can be many types of curve fits: sine fit, polynomial fit, exponential fit, damped sine fit, square fit, etc.

Steps to convert the Python code into Java code.

1. Decoding the code

At first, we need to identify and understand the relevant code that exists in the PSLab Python project. The following is the Python code for the exponential fit.

import numpy as np

def func(self, x, a, b, c):
    return a * np.exp(-x / b) + c

This is the model function. It takes the independent variable, i.e. x, as the first argument and the parameters to fit as separate remaining arguments.

def fit_exp(self, t, v):
    from scipy.optimize import curve_fit
    size = len(t)
    v80 = v[0] * 0.8
    for k in range(size - 1):
        if v[k] < v80:
            rc = t[k] / .223
            break
    pg = [v[0], rc, 0]

Here, we are calculating the initial guess for the parameters.

po, err = curve_fit(self.func, t, v, pg) 

The curve_fit function is called here, with the model function func, the time array t, the voltage array v and the list of initial guess parameters pg as its arguments.

if abs(err[0][0]) > 0.1:
    return None, None
vf = po[0] * np.exp(-t/po[1]) + po[2]
return po, vf

2. Curve-fitting in Java

The next step is to implement the same functionality in Java. The following is the code for the exponential fit written in Java using the Apache Commons Math API.

ParametricUnivariateFunction exponentialParametricUnivariateFunction = new ParametricUnivariateFunction() {
        @Override
        public double value(double x, double... parameters) {
            double a = parameters[0];
            double b = parameters[1];
            double c = parameters[2];
            return a * exp(-x / b) + c;
        }
        @Override
        public double[] gradient(double x, double... parameters) {
            double a = parameters[0];
            double b = parameters[1];
            double c = parameters[2];
            return new double[]{
                    exp(-x / b),
                    (a * exp(-x / b) * x) / (b * b),
                    1
            };                                                     
        }
    };

ParametricUnivariateFunction is an interface representing a real function which depends on an independent variable and some extra parameters. It plays the role of the model function that we used in Python.

It has two methods: value and gradient. value takes the independent variable and some parameters and returns the function value.

gradient returns the double array of partial derivatives of the function with respect to each parameter (not the independent variable x).

public ArrayList<double[]> fitExponential(double time[], double voltage[]) {
    double size = time.length;
    double v80 = voltage[0] * 0.8;
    double rc = 0;
    double[] vf = new double[time.length];
    for (int k = 0; k < size - 1; k++) {
        if (voltage[k] < v80) {
            rc = time[k] / .223;
            break;
        }
    }
    double[] initialGuess = new double[]{voltage[0], rc, 0};
    // initialize the optimizer and curve fitter
    LevenbergMarquardtOptimizer optimizer = new LevenbergMarquardtOptimizer();

LevenbergMarquardtOptimizer solves a least squares problem using the Levenberg-Marquardt algorithm (LMA).

CurveFitter fitter = new CurveFitter(optimizer);

LevenbergMarquardtOptimizer is used by CurveFitter for the curve fitting.

 for (int i = 0; i < time.length; i++)
            fitter.addObservedPoint(time[i], voltage[i]);

addObservedPoint adds data points to the CurveFitter instance.

double[] result = fitter.fit(exponentialParametricUnivariateFunction,initialGuess);

The fit method, with the ParametricUnivariateFunction and the guess parameters as arguments, returns an array of fitted parameters.

    for (int i = 0; i < time.length; i++)
        vf[i] = result[0] * exp(-time[i] / result[1]) + result[2];
    return new ArrayList<double[]>(Arrays.asList(result, vf));
}

Additional Notes

The exponential fit implementation in both Java and Python uses the Levenberg-Marquardt algorithm, which solves nonlinear least squares problems.

A detailed account about Levenberg-Marquardt Algorithm and least square problem is available here.

Least squares is a standard approach in regression analysis for the approximate solution of overdetermined systems. It is very useful in curve fitting. Let’s understand it with an example.

In the above graph, blue dots represent data points, and L1 and L2 represent lines fitted by the models. So, what least squares does is calculate the sum of the squares of the distances between the actual values and the fitted values, and the line with the least value is the line of best fit. Here, it’s clear L1 is the winner.


Implementing Real Time Speech to Text in SUSI Android App

Android has a very interesting built-in feature called Speech to Text. As the name suggests, it converts the user’s speech into text. Android provides this feature in the form of a recognizer intent. Just the four lines of code below are needed to start the recognizer intent to capture speech and convert it into text. This was how it was implemented earlier in the SUSI Android App too. Simple, isn’t it?

Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
       RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE,
       "com.domain.app");
startActivityForResult(intent, REQ_CODE_SPEECH_INPUT);

But there is a problem here. It pops up an alert dialog box to take the speech input, like this:

Although this is great and works, there were many disadvantages of using this:

  1. It breaks the connection between the user and the app.
  2. When the user speaks, it first listens to the complete sentence and then converts it into text. So, the user is not able to see what the first word of the sentence was until the complete sentence has been spoken.
  3. There is no way for the user to check whether the sentence being spoken is actually being converted to text correctly or not. The user only gets to know about it when the sentence is completed.
  4. The user cannot cancel the conversion of speech to text if some word is converted wrongly, because the user doesn’t even get to know that a word was converted wrongly. Whether the conversion is right or not stays a suspense until the end.

So, now the main challenge was: how can we improve Speech to Text and implement it like Google Now? What Google Now does is convert speech to text while taking the input, making it a real-time speech to text converter. So, basically, if you speak the second word of a sentence, the first word is already converted and displayed. You can also cancel the conversion of the rest of the sentence if the first word is converted wrongly.

I searched this problem a lot on the internet, read various blogs and StackOverflow answers, and read the Speech to Text documentation until I finally came across the SpeechRecognizer class and RecognitionListener. The plan was to use an object of the SpeechRecognizer class with the RecognizerIntent and a RecognitionListener to generate partial results.

So, I just added this line to the above code snippet of the recognizer intent.

intent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS,true);

And introduced the SpeechRecognizer class and a RecognitionListener like this:

SpeechRecognizer recognizer = SpeechRecognizer
        .createSpeechRecognizer(this.getApplicationContext());

RecognitionListener listener = new RecognitionListener() {
    @Override
    public void onResults(Bundle results) {
    }

    @Override
    public void onError(int error) {
    }

    @Override
    public void onPartialResults(Bundle partialResults) {
        // The main method required. Just keep printing results in this.
    }

    // The remaining RecognitionListener methods must be implemented too,
    // even if left empty
    @Override public void onReadyForSpeech(Bundle params) {}
    @Override public void onBeginningOfSpeech() {}
    @Override public void onRmsChanged(float rmsdB) {}
    @Override public void onBufferReceived(byte[] buffer) {}
    @Override public void onEndOfSpeech() {}
    @Override public void onEvent(int eventType, Bundle params) {}
};

Now combine the SpeechRecognizer class with the RecognizerIntent and the RecognitionListener with these two lines:

recognizer.setRecognitionListener(listener);
recognizer.startListening(intent);

Everything is now set up. You can now use the result in the onPartialResults method and keep displaying partial results, as sketched below. So, if you want to say “My name is…”, “My” will be displayed on screen by the time you speak “name”, and “My name” will be displayed by the time you speak “is”. Isn’t it cool?
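For instance, a minimal sketch of the onPartialResults body could pull the text out of the bundle like this (textView here stands for whatever view you display the transcription in):

@Override
public void onPartialResults(Bundle partialResults) {
    ArrayList<String> matches = partialResults
            .getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
    if (matches != null && !matches.isEmpty()) {
        // The first entry is the most likely transcription so far
        textView.setText(matches.get(0));
    }
}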

This is how real-time Speech to Text is implemented in the SUSI Android App. If you want to implement this in your app too, just follow the above steps and you are good to go. The results look like this:


Using Activities with Bottom navigation view in Phimpme Android

Can we use the Bottom Navigation View with activities? In the FOSSASIA Phimpme project we integrated two big projects, Open Camera and Leafpic. Both are activity-based projects and are not developed around fragments, while the bottom navigation pattern is normally used with fragments.

In Leafpic as well as in Open Camera there is a lot of multi-level inheritance running the app components, which made it a problem to continue working with activities. For theme management, using a base activity is necessary, and the base activity extends AppCompatActivity. The Leafpic code hierarchy is very complex and depends on various activities, so the better way is to stay with activities while using the Bottom Navigation View and make the activities work in a way similar to fragments.

Image source: https://storage.googleapis.com/material-design/

Solution:

This is possible if we copy the bottom navigation view into each of the activities and set its clicked state according to the activity’s menu item ID.

Steps to Implement (How I did in Phimpme)

  • Add a Base Activity which takes care of two things

    • Taking the layout resource and setting it in the activity.
    • Setting the correct navigation menu item as checked.

The function selectBottomNavigationBarItem(int itemId) takes care of which item is currently active. Call this function in onStart(): the getNavigationMenuItemId() function returns the menu ID, and selectBottomNavigationBarItem updates the bottom navigation view.

public abstract class BaseActivity extends AppCompatActivity implements BottomNavigationView.OnNavigationItemSelectedListener

private void updateNavigationBarState() {
   int actionId = getNavigationMenuItemId();
   selectBottomNavigationBarItem(actionId);
}

void selectBottomNavigationBarItem(int itemId) {
   Menu menu = navigationView.getMenu();
   for (int i = 0, size = menu.size(); i < size; i++) {
       MenuItem item = menu.getItem(i);
       boolean shouldBeChecked = item.getItemId() == itemId;
       if (shouldBeChecked) {
           item.setChecked(true);
           break;
       }
   }
}
  • Add two abstract functions in the BaseActivity for the above tasks (a sketch of how they are wired together follows the declarations)
public abstract int getContentViewId();

public abstract int getNavigationMenuItemId();
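Putting these together, the wiring inside the BaseActivity might look like this (a sketch; the bottom navigation view ID is an assumption):

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Each subclass supplies its own layout via the abstract method
    setContentView(getContentViewId());
    navigationView = (BottomNavigationView) findViewById(R.id.bottom_navigation); // assumed ID
    navigationView.setOnNavigationItemSelectedListener(this);
}

@Override
protected void onStart() {
    super.onStart();
    // Highlight the menu item belonging to the current activity
    updateNavigationBarState();
}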
  • Extend every activity from the BaseActivity

For example, I created a blank activity, AccountsActivity, and extended it from BaseActivity:

public class AccountsActivity extends BaseActivity
  • Implement the abstract functions in every activity and set the correct layout and menu item ID.

@Override
public int getContentViewId() {
 return R.layout.content_accounts;
}

@Override
public int getNavigationMenuItemId() {
   return R.id.navigation_accounts;
}
  • Remove the setContentView(R.layout.activity_main); call from each activity, since the BaseActivity now sets the layout
  • Add the onNavigationItemSelected in BaseActivity

@Override
public boolean onNavigationItemSelected(@NonNull final MenuItem item) {
    switch (item.getItemId()) {
        case R.id.navigation_home:
            startActivity(new Intent(this, LFMainActivity.class));
            break;
        case R.id.navigation_camera:
            startActivity(new Intent(this, CameraActivity.class));
            break;
        case R.id.navigation_accounts:
            startActivity(new Intent(this, AccountsActivity.class));
            break;
    }
    finish();
    return true;
}

Output:

The transition is as smooth as when we use bottom navigation views with fragments. The bottom navigation also appears on top of each activity.

Source: https://stackoverflow.com/a/42811144/2902518


Phimpme: Merging several Android Projects into One Project

To speed up our development of the version 2 of the Phimpme Android app, we decided to use some existing Open Source libraries and projects such as Open Camera app for Camera features and Leafpic for the photo gallery.

Integrating several big projects into one is a crucial step, and it is not difficult. Here are some points to ponder.

  1. Clone each project separately. Build the project and note down the features which you want to integrate. Explore the manifest file to know the launcher activity, the services and the permissions the app requires.

I explored the Leafpic app manifest file, found out its launcher class, and changed it to our own. I looked at the permissions the app requires, such as camera, access fine location and write external storage.

  2. Follow a bottom-up approach while integrating; it makes life easier. By the bottom-up approach I mean: create a new branch and then start deleting the code which is not required. Always work in a branch so that no code is lost or messed up.

Not everything is required at the moment. So I first integrated the whole Leafpic app and then removed the splash screen. I also noted down the features that needed to be removed, such as the drawer, the search bar, etc.

  3. Remove and commit. In a big project things are interlinked, so removing one feature can actually make you remove most of the code. So my strategy was to focus on one feature and commit the code once it was removed. This lets you hard-reset your code at any time.

A lot of times I used git reset --hard on my code, because it got messed up.

  4. Licensing: the thing which is most important in using open source code is the license. Always follow what is written in the license; this is the way we can give credit for the authors’ hard work. There are various types of licenses; three famous ones are the Apache License, the MIT License and the GNU General Public License. All have their pros and cons.

Apart from the license there can be other conditions as well. For example, Leafpic, which we are using in Phimpme, has a condition to inform its developers about the project in which we are using it.

  5. Be aware of file duplication: sometimes many files have the same name, which results in file duplication. So refactor them early in the code.

I refactored the MainActivity in Leafpic to LFMainActivity so as not to clash with the existing one. Resolving package names is also one of the more tedious tasks: Android Studio already does a lot of the refactoring, but some parts are left over.

  • Make sure your manifest file contains the correct names.
  • The best way to refactor your package name in XML files is Ctrl+Shift+F in Ubuntu, or Edit → Find → Find in Path in Android Studio. Then search for the old package name and replace it with the new one. You can use Alt+J for multi-cursor as well.
  • Clean and rebuild project.

Run the app at each step so that you can catch errors as they appear. Also use the debugger for debugging.


Using AutoCompleteTextView for interactive search in Open Event Android App

Providing a search option is essential in the Open Event Android app to make it easy for the user to see only the desired results. But it becomes difficult to implement this with good performance if the data set is large, so simply providing a list to scroll through may not be enough or efficient. AutoCompleteTextView provides a way to search data by offering suggestions after a user types in some initial letters of the search query.

How does it work? We feed the data to an adapter which is attached to the view. So, when a user starts typing the query, suggestions with similar names start appearing in the form of a list.

For example, see above: typing “Hall” gives the user suggestions to pick the entries which have the word “Hall” in them, making it easier for the user to search.

Let’s see how to implement it. For the first step, declare the view in the XML layout like this, where our view goes by the ID “map_toolbar” and uses white colour for the text that will appear in it. The input type signifies that autocomplete and autocorrect are enabled.

<AutoCompleteTextView
       android:id="@+id/map_toolbar"
       android:layout_width="match_parent"
       android:layout_height="wrap_content"
       android:ems="12"
       android:hint="@string/search"
       android:shadowColor="@color/white"
       android:imeOptions="actionSearch"
       android:inputType="textAutoComplete|textAutoCorrect"
       android:textColorHint="@color/white"
       android:textColor="@color/white"
/>

Now initialise the adapter in the fragment/activity with the list “searchItems” containing the information about the locations. This function is in a fragment, so modify things accordingly. “textView” is the AutoCompleteTextView that we initialised. To explain this function further: when a user clicks on any item from the suggestions, the soft keyboard hides. You can define the desired operation here.

Setting up AutoCompleteTextView with the locations

ArrayAdapter<String> adapter = new ArrayAdapter<>(getActivity(), android.R.layout.simple_dropdown_item_1line, searchItems);
textView.setAdapter(adapter);

textView.setOnItemClickListener((parent, view, position, id) -> {

    // Things you want to do on clicking the item

    View mapView = getActivity().getCurrentFocus();

    if (mapView != null) {
        InputMethodManager imm = (InputMethodManager) getActivity().getSystemService(Context.INPUT_METHOD_SERVICE);
        imm.hideSoftInputFromWindow(mapView.getWindowToken(), 0);
    }
});

See the complete code here to find the implementation of AutoCompleteTextView in the map fragment of Open Event Android app.


The Joy of Testing with MVP in Open Event Orga App

Testing applications is hard, and testing Android applications is harder. The natural way an Android developer codes is completely untestable. We are born and molded into creating God classes – Activities – and performing every bit of logic inside them. Some thought that the introduction of Fragments would bring a little bit of modularity, but we proved otherwise by shifting to God Fragments. When the natural urge of an Android developer to

  • apply logic,
  • load UI,
  • handle concurrency (hopefully, not using AsyncTask),
  • load data – from the network; disk; and cache,
  • and manage the state of the Activity/Fragment

finally meets with a new form of revelation that he/she should test his/her application, all of the concepts acquired are shattered in a moment. The person goes to the Android docs to see why he started coding that way and realizes that even the system is flawed – the Android documentation is full of examples promoting God Activities, which they have used to show the use of an API for only one reason – brevity. It’s not uncommon for us to steal some code from StackOverflow or the Android docs, but we don’t realize that it is presented there without an application environment or structure, all the required components just glued together to make it functionally complete, so that the reader does not have to configure anything anymore. Why would an answer from StackOverflow load a barcode scanner using a builder? Or build a ContentProvider to show how to query sorted data from SQLite? Or use a singleton for your provider classes? The simple answer is, they won’t. But this doesn’t mean you shouldn’t, either.

The first thought that enters a developer’s mind when he gets his hands on a piece of code is to paste it in the correct position and get it to work. There is always this moment of hesitation where conscience rings a bell saying, “What are you doing? Is it the right way to do it? How many lines till you stop overloading this Activity? It’s 2000 lines already”. But it dies as soon as we see that the feature is working. And why test something which we have confirmed to work, right? I just wrote this code to show a progress bar while the data loads and hide it when it is done. I can see it working; what will I achieve by painfully writing a test for it, mocking the loading conditions and all? Wrong! Unit tests which test your trivial utils like date modification, string parsing, etc. are good and needed, but are so trivial that they are hard to get wrong, and if they do go wrong, they are easy to fix, as they have a single usage throughout the app and it is easy to spot and fix bugs.

The real problem is testing your app over dynamic conditions, where you have to emulate them so you can see if your app is making the right decisions. The progress bar example may work for now, but what if it breaks during refactoring, or you write the same code elsewhere and forget to hide the progress bar? Simply copying a well-written test and changing 2-3 class names will fail the build and tell you what’s wrong. A well-contracted app can even contain tests to check that there will be no memory leaks. But none of this is possible if everything is jumbled into a single Activity with callbacks, loaders, UI handling, business logic, etc. You can’t even think where to begin. In this post, I will briefly discuss how to design and test applications using the MVP pattern.

MVP to the Rescue

Before moving ahead, I must put in a disclaimer saying that MVP is a design pattern which follows a very opinionated implementation. The extent to which you want to refactor your app and the number of abstractions you are willing to make are in your hands. There aren’t any golden rules whose violation will make your implementation fail to be called MVP. Remember, the client doesn’t care about architecture; it’s for you, so whatever makes your life and testing easier works. Also, I’ll use an axiom related to testing here: “Test until fear turns into boredom”. You can apply the same to your MVP implementation. Testing and design patterns are here to eradicate your fear of failure, not to bore you.

So, in this guide, I’ll be using the use case of a QR code scanner activity built using the MVP pattern, which we have employed in the Open Event Orga application (GitHub repo). Because we are focusing on the test pattern and how to make it easy to test our app logic, I have omitted the model part from the MVP equation. The reason is that the possible models in the application would have been the camera loader or barcode initializer, and the caveat associated with this is that both these modules rely heavily on Android-specific view classes, namely SurfaceView and other lifecycle methods. You could always create your way around it to include them in a separate model, but it won’t help us in writing unit tests for them, because firstly, they aren’t our logic to test, and secondly, they can’t be tested in a unit test (those models would have probably just implemented certain setters and getters).

So, the main purpose of our QR code scanner class is to scan a QR code and match it with a list of identifiers, and if there is a match, return it successfully to the caller. In this specific example, the identifiers will be the ticket IDs of event attendees, and the caller will be an event organizer scanning QR codes to check attendees in. The use case sounds simple, but it has several mini use cases and dependencies of its own, which we have to take into account while designing the view and presenter classes. Let’s discuss them one by one:

App State – Start: Activity starts, loads camera

  • Permission Granted: detects that the app already has Camera Permission,
    • starts scanning
  • Permission Absent: detects that the app doesn’t have Camera Permission,
    • Denied: asks for it, denied, shows error
    • Accepted: asks for it, granted, starts scanning

App State – QR Code Detected: Starts parsing them

  • Attendees are not present: attendees aren’t present for some reason, either due to an internal error or because they have not loaded yet
    • stops parsing
  • Attendees are present:
    • QR Code does not match with any attendee : Do nothing
    • QR Code matches with one of the attendee : Send attendee to caller

App State: Camera or containing View is getting destroyed

  • Release Camera

There can be many more internal data flows, but this much is sufficient for our example. So, let’s start defining our contracts using the above knowledge. First, we will design our view. So what should be our strategy? Always think of presenters and views as mapped in a 1:1 relation. They can have conversations with each other; the difference is that the view is only allowed to talk to the presenter, while the presenter may talk to models too. So, the view is going to get all of its information from the presenter and can only take action when the presenter tells it to, essentially meaning that the view is dumb and passive. The presenter can talk to the view and model(s), meaning the collection of information the presenter has does not have to come from the view only. In fact, the less dependent the presenter is on the view, the better. The main motive of MVP is to make our logic less dependent on views.

Presenter Contract

In our example, the presenter relies on the view to tell it when a certain event happens; these can be lifecycle callbacks, permission grants/denials, camera load/destroy, or anything purely related to the Android implementation. So, we generally know from the start what kind of information the presenter needs from a view, so let’s design our presenter first.

public interface IScanQRPresenter {

    void attach(long eventId, IScanQRView scanQRView);

    void start();

    void detach();

    void cameraPermissionGranted(boolean granted);

    void onBarcodeDetected(Barcode barcode);

    void onScanStarted();

    void onCameraLoaded();

    void onCameraDestroyed();

}

 

The methods are self-explanatory. Don’t worry if you didn’t get why we defined certain hooks like onScanStarted() or why we didn’t define onScanStopped(). These kinds of details will reveal themselves as you develop your components. You can skip to next section if you don’t want to know why we did it.

Basically, you should define a callback for events which are not reliably synchronous. What? Let me explain. Let’s say you got your camera permission and requested the view to start scanning, but you have to wait till the camera is loaded, as it is not a synchronous call (and it shouldn’t be, or your main thread will block). So, instead of that, our presenter will request the camera to load, go into an idle state, wait for the onCameraLoaded() call, and then request the scan to start, and since that is also not synchronous work, it will go into an idle state again, wait for onScanStarted(), and then continue its work. Whenever the camera is destroyed, the onCameraDestroyed() callback will be called, and we will stop the scanning, and since there is nothing to be done after that, we won’t wait till the scanning has stopped, thus dropping the need for an onScanStopped() callback.

View Contract

View contract will come naturally to you once you have understood the data flow. It will contain the commands presenter will issue on the view and also, the requests for data that view holds. So, this will be our view.

public interface IScanQRView {

    boolean hasCameraPermission();

    void requestCameraPermission();

    void showPermissionError(String error);

    void onScannedAttendee(Attendee attendee);

    void showBarcodePanel(boolean show);

    void showBarcodeData(@NonNull String data);

    void showProgressBar(boolean show);

    void loadCamera();

    void startScan();

    void stopScan();

}

 

Almost all commands are straightforward but let me quickly explain showBarcodePanel(boolean) and showBarcodeData(String). They are used to display the currently visible barcode data to the user.

So, with our implementations set and data flow in place, let’s start writing tests for the feature. Yes, we’ll write tests without implementation and then you’ll see how easy it will be to write your views and presenters with only one goal to mind, to make the tests pass. If your tests are written correctly and cover everything, you should feel confident that your app will work without even seeing the actual implementation, because that is what tests are for. And by making passive and dumb views, imagine how light your instrumentation tests will be. Just check that the individual methods in view implementation are working as expected and you are done! No need to test logic or complex interactions, etc because you have got it covered in the unit tests themselves. This is not only a benefit of data flow tests but also a best practice. You always want to follow DRY, Don’t Repeat Yourself, even while testing.

Tests

So, we will start writing our tests now and you’ll realize how easy it is and all the hard work of abstraction and designing will pay off.

Attach Tests

So, firstly, we will test that the presenter calls the appropriate methods when it is attached and started.

@Test
public void shouldLoadAttendeesAutomatically() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));

    scanQRPresenter.start();

    verify(eventRepository).getAttendees(eventId, false);
}

@Test
public void shouldLoadCameraAutomatically() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));

    scanQRPresenter.start();

    verify(scanQRView).loadCamera();
}

 

Here, we are using Mockito to mock our EventDataRepository to return locally defined attendees instead of making an actual network call. Then we call start() on the presenter in each test and verify, in the first test, that the presenter calls getAttendees on the EventDataRepository, and in the second test, that it requests the view to load the camera.

Note that in the implementation, both the loading of the camera and the loading of attendees take place on start, but it is best practice to test them separately, so that when a test fails, we know why it did.

Detach Tests

@Test
public void shouldDetachViewOnStop() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));

    scanQRPresenter.start();

    assertNotNull(scanQRPresenter.getView());

    scanQRPresenter.detach();

    assertNull(scanQRPresenter.getView());
}

@Test
public void shouldNotAccessViewAfterDetach() {
    scanQRPresenter.detach();

    scanQRPresenter.start();
    scanQRPresenter.onCameraLoaded();
    scanQRPresenter.cameraPermissionGranted(false);
    scanQRPresenter.cameraPermissionGranted(true);
    scanQRPresenter.onScanStarted();
    scanQRPresenter.onBarcodeDetected(barcode2);
    scanQRPresenter.onCameraDestroyed();

    verifyZeroInteractions(scanQRView);
}

 

In the detach tests, we verify that after attaching (the view is not null), the detach call makes the presenter drop the reference to the view, making it null. In the second test, we perform all possible interactions after detaching and confirm that no call whatsoever was made on the view. Tests like these enforce avoiding memory leaks and check for any NullPointerException that may happen after the view was made null.

Note: This does not mean that memory leaks will not happen if this test passes. You can cause memory leaks by giving the view reference to any long-living object, not just the presenter. This just ensures that the presenter will not hold the view reference after detach and won’t reference the null view in future.

Permission Tests

@Test
public void shouldStartScanOnCameraLoadedIfPermissionPresent() {
    when(scanQRView.hasCameraPermission()).thenReturn(true);

    scanQRPresenter.onCameraLoaded();

    verify(scanQRView).startScan();
}

@Test
public void shouldAskPermissionOnCameraLoadedIfPermissionsAbsent() {
    when(scanQRView.hasCameraPermission()).thenReturn(false);

    scanQRPresenter.onCameraLoaded();

    verify(scanQRView).requestCameraPermission();
}

@Test
public void shouldStartScanningOnPermissionGranted() {
    scanQRPresenter.cameraPermissionGranted(true);

    verify(scanQRView).startScan();
}

@Test
public void shouldShowErrorOnPermissionDenied() {
    scanQRPresenter.cameraPermissionGranted(false);

    verify(scanQRView).showPermissionError(matches("(.*permission.*denied.*)|(.*denied.*permission.*)"));
}

The permission tests are straightforward:

Implicit Permission Handling

  1. If the view already has the camera permission and camera has loaded, start scanning
  2. If the view does not have camera permission and the camera has loaded, request the permission

Request Handling

  1. If the request was granted, start scanning
  2. If the permission was denied, show the permission error. Here, I have used a regex to match that the error message contains the words “permission” and “denied”; you can use anyString() from Mockito for more flexibility, or a specific message for tighter testing

Camera Destroy Test

@Test
public void shouldStopScanOnCameraDestroyed() {
    scanQRPresenter.onCameraDestroyed();

    verify(scanQRView).stopScan();
}

Pretty simple: stop the scan on destruction of the camera.

Flow Tests

You can also test that the callback flow happens in order so that not only the unit tests work but also the implementation logic is correct.

/**
 * Checks that the flow of commands happen in order
 */
@Test
public void shouldFollowFlowOnImplicitPermissionGrant() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));
    when(scanQRView.hasCameraPermission()).thenReturn(true);

    scanQRPresenter.start();

    InOrder inOrder = inOrder(scanQRView);
    inOrder.verify(scanQRView).loadCamera();
    scanQRPresenter.onCameraLoaded();
    inOrder.verify(scanQRView).startScan();
}

@Test
public void shouldShowProgressInBetweenImplicitPermissionGrant() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));
    when(scanQRView.hasCameraPermission()).thenReturn(true);

    scanQRPresenter.start();

    InOrder inOrder = inOrder(scanQRView);
    inOrder.verify(scanQRView).showProgressBar(true);
    scanQRPresenter.onCameraLoaded();
    scanQRPresenter.onScanStarted();
    inOrder.verify(scanQRView).showProgressBar(false);
}

Here, we are verifying two things. First, that if we have the permission granted implicitly, we load the camera and start the scan, in order. This test isn’t that useful, as we have already tested that the loading of the camera happens on start and that the scan starts when the camera is loaded; in fact, this is an example of breaking the DRY rule. Even though it doesn’t hurt to include it, it also doesn’t help, as it does not cover anything that hasn’t already been tested.

The second test is important and tests that progress bar is correctly shown and hidden after certain communications have taken place and the scan has started. Similarly, we can also test the progress bar behavior over all of the possible combinations of cases that can happen. The code snippets below show the tests:

@Test
public void shouldShowProgressInBetweenImplicitPermissionDenyRequestGrant() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));
    when(scanQRView.hasCameraPermission()).thenReturn(false);

    scanQRPresenter.start();

    InOrder inOrder = inOrder(scanQRView);
    inOrder.verify(scanQRView).showProgressBar(true);
    scanQRPresenter.onCameraLoaded();
    scanQRPresenter.cameraPermissionGranted(true);
    scanQRPresenter.onScanStarted();
    inOrder.verify(scanQRView).showProgressBar(false);
}

@Test
public void shouldShowProgressInBetweenImplicitPermissionDenyRequestDeny() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));
    when(scanQRView.hasCameraPermission()).thenReturn(false);

    scanQRPresenter.start();

    InOrder inOrder = inOrder(scanQRView);
    inOrder.verify(scanQRView).showProgressBar(true);
    scanQRPresenter.onCameraLoaded();
    scanQRPresenter.cameraPermissionGranted(false);
    inOrder.verify(scanQRView).showProgressBar(false);
}

QR Code Detection Tests

@Test
public void shouldNotSendAnyBarcodeIfAttendeesAreNull() {
    sendNullInterleaved();

    verify(scanQRView, never()).onScannedAttendee(any(Attendee.class));
}

@Test
public void shouldNotSendAttendeeOnWrongBarcodeDetection() {
    scanQRPresenter.setAttendees(attendees);
    sendNullInterleaved();

    verify(scanQRView, never()).onScannedAttendee(any(Attendee.class));
}

@Test
public void shouldSendAttendeeOnCorrectBarcodeDetection() {
    // Somehow the setting in setUp is not working, a workaround till fix is found
    RxJavaPlugins.setComputationSchedulerHandler(scheduler -> Schedulers.trampoline());

    scanQRPresenter.setAttendees(attendees);

    barcode1.displayValue = "test4-91";
    scanQRPresenter.onBarcodeDetected(barcode1);

    verify(scanQRView).onScannedAttendee(attendees.get(3));
}

 

Lastly, the core tests of our presenter. To explain the tests, let me show you the sendNullInterleaved() method

private void sendNullInterleaved() {
    sendBarcodeBurst(barcode1);
    sendBarcodeBurst(null);
    sendBarcodeBurst(barcode1);
    sendBarcodeBurst(null);
    sendBarcodeBurst(barcode2);
    sendBarcodeBurst(barcode2);
    sendBarcodeBurst(null);
    sendBarcodeBurst(barcode1);
    sendBarcodeBurst(null);
}

 

So what it basically does is send some barcodes interleaved with null (no barcode detected) to the presenter using the onBarcodeDetected method, to emulate the real camera sending barcode values and sometimes sending null whenever no barcode is in view.

The first test simply checks that no attendee is sent if the attendee list is null. Seems pretty obvious. The second one checks that if a barcode does not match any attendee’s identifier, no attendee is sent either. It does this by setting an arbitrary list of attendees with different identifiers and sending non-matching barcodes to the presenter. Lastly, the successful test, where a single matching barcode is sent to the presenter, and it should send the particular attendee with the matching identifier.

Phew! Quite a lot of tests, and we are done. The tests were not very large and mostly self-explanatory. Now, you just have to implement the view and presenter methods till all the tests light up green, and you are done! You will have created a feature using the MVP design and implemented it using test-driven development. As a taste of what that implementation might look like, a sketch of the permission handling in the presenter follows.
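Here is a minimal sketch (not the exact Orga code) of the permission-handling part of the presenter that would satisfy the tests above; getView() is assumed to return the attached view or null after detach:

@Override
public void onCameraLoaded() {
    if (getView() == null)
        return;
    if (getView().hasCameraPermission())
        getView().startScan();
    else
        getView().requestCameraPermission();
}

@Override
public void cameraPermissionGranted(boolean granted) {
    if (getView() == null)
        return;
    if (granted)
        getView().startScan();
    else
        getView().showPermissionError("Camera permission was denied");
}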

So start testing and get a seal of reliability and confidence about your code and the satisfaction of seeing the green bar fill up.


Remove redundancy of images while uploading it from the Phimpme photo app

Uploading images is an important feature for the Phimpme photo app. While uploading images in the Phimpme application, there was a high probability that the user uploaded redundant images, since there was no check on the upload. To tackle this problem, images are hashed so that no two identical images get uploaded. To generate the hash of a file, the MD5 and SHA-1 algorithms are used.

What is MD5 algorithm?

The MD5 hashing algorithm is a one-way cryptographic function that accepts a message of any length as input and returns as output a fixed-length digest value to be used for authenticating the original message. It produces a 128 bit hash value. 

WhatIs.com

What is SHA1 algorithm?

SHA-1 (Secure Hash Algorithm 1) is a cryptographic hash function designed by the United States National Security Agency and is a U.S. Federal Information Processing Standard published by the United States NIST. SHA-1 produces a 160-bit (20-byte) hash value known as a message digest. A SHA-1 hash value is typically rendered as a hexadecimal number, 40 digits long.  

 – Wikipedia

Reasons for using MD5 + SHA1 instead of SHA256

  1. The combined digest identifies the file reliably while maintaining its integrity check.
  2. Hashing with MD5 + SHA1 is faster than hashing with SHA256.

Implementation

Create a separate utils class named FileUtils.java. The class contains static functions to get the hash of any file in general. The MessageDigest class is used in both algorithms.

The final hash is the combination of the MD5 hash string and the SHA-1 hash string. MessageDigest takes the algorithm name as its argument to specify which hashing function will be used. A FileInputStream is initialised with the input file, and the message digest is updated with the data bytes of the file. The StringBuffer class is then used to build the fixed-length hexadecimal value from the message digest, since the string being built needs to be mutable. The string value of the generated StringBuffer object is returned.

public class FileUtils {

    // The utility method to get the hash of the file
    public static String getHash(final File file) throws NoSuchAlgorithmException, IOException {
        return md5(file) + "_" + sha1(file);
    }

    // The MD5 algorithm
    public static String md5(final File file) throws NoSuchAlgorithmException, IOException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        FileInputStream fis = new FileInputStream(file);
        byte[] dataBytes = new byte[1024];
        int nread = 0;
        while ((nread = fis.read(dataBytes)) != -1) {
            md.update(dataBytes, 0, nread);
        }

        // convert the bytes to hex format
        StringBuffer sb = new StringBuffer("");
        for (byte mdbyte : md.digest()) {
            sb.append(Integer.toString((mdbyte & 0xff) + 0x100, 16).substring(1));
        }

        fis.close();
        md.reset();
        return sb.toString();
    }

    // The SHA-1 algorithm
    public static String sha1(final File file) throws NoSuchAlgorithmException, IOException {
        MessageDigest md = MessageDigest.getInstance("SHA1");
        FileInputStream fis = new FileInputStream(file);
        byte[] dataBytes = new byte[1024];
        int nread = 0;
        while ((nread = fis.read(dataBytes)) != -1) {
            md.update(dataBytes, 0, nread);
        }

        // convert the bytes to hex format
        StringBuffer sb = new StringBuffer("");
        for (byte mdbyte : md.digest()) {
            sb.append(Integer.toString((mdbyte & 0xff) + 0x100, 16).substring(1));
        }

        fis.close();
        md.reset();
        return sb.toString();
    }
}
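A hypothetical caller could then use the hash to skip duplicates before uploading (uploadedHashes and uploadImage are illustrative names, not part of FileUtils):

// Skip the upload when this hash has been seen before
String hash = FileUtils.getHash(imageFile); // may throw NoSuchAlgorithmException / IOException
if (!uploadedHashes.contains(hash)) {       // uploadedHashes: a Set<String> kept by the app
    uploadedHashes.add(hash);
    uploadImage(imageFile);                 // hypothetical upload call
}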

Conclusion

Storage space is limited. Hence it is of utmost importance that a particular file can be identified by a unique identifier in the form of a fixed hash value. Not only is this beneficial for optimal space utilization, it is also useful for tracking a file if necessary.

To learn more about hash algorithms, visit: https://www.tutorialspoint.com/cryptography/cryptography_hash_functions.htm

 


Managing Edge Glow Color in Nested ScrollView in Open Event Android App

After the introduction of material design by Google, many new UI elements have been introduced. Material design is based on the interaction and movement of colours and objects in the real world rather than synthetic, unnatural phenomena. Sometimes it gets messy when we try to keep up with our material aesthetic. The most popular example of this is the edge glow colour. The edge glow colour of ListView, RecyclerView, ScrollView and NestedScrollView is managed by the accent colour declared in the styles. We cannot change the accent colour for a particular activity, as it is a constant, so it is the same across the entire app; but in popular apps like Contacts by Google it appears different for every individual contact. Fixing an issue in the Open Event Android app, I came across the same problem. In this tutorial I solve this problem particularly for NestedScrollView. See the bottom of the screenshots for comparison.

  • You need to check first that the Android version is LOLLIPOP or above, as the setColor() function for the edge effect was introduced in LOLLIPOP
  • The fields we have to modify are declared as “mEdgeGlowTop” and “mEdgeGlowBottom”
  • We get the glow property of NestedScrollView through the EdgeEffectCompat class instead of the EdgeEffect class directly, unlike for ListView, because of its more recent introduction

Let’s have a look at the function, which accepts as arguments a colour as an integer and the nested scroll view whose colour has to be set.

First you have to get the fields “mEdgeGlowTop” and “mEdgeGlowBottom”, which represent the glows generated when you scroll up and down, from the NestedScrollView class. Similarly, “mEdgeGlowLeft” and “mEdgeGlowRight” serve horizontal scrolling.

public static void changeGlowColor(int color, NestedScrollView scrollView) {
   try {

       Field edgeGlowTop = NestedScrollView.class.getDeclaredField("mEdgeGlowTop");

       edgeGlowTop.setAccessible(true);

       Field edgeGlowBottom = NestedScrollView.class.getDeclaredField("mEdgeGlowBottom");

       edgeGlowBottom.setAccessible(true);

Get the reference to the edge effect; this is the part that differs from RecyclerView or ListView when setting the edge glow colour in NestedScrollView.

EdgeEffectCompat edgeEffect = (EdgeEffectCompat) edgeGlowTop.get(scrollView);

       if (edgeEffect == null) {
           edgeEffect = new EdgeEffectCompat(scrollView.getContext());
           edgeGlowTop.set(scrollView, edgeEffect);
       }

       Views.setEdgeGlowColor(edgeEffect, color);

       edgeEffect = (EdgeEffectCompat) edgeGlowBottom.get(scrollView);
       if (edgeEffect == null) {
           edgeEffect = new EdgeEffectCompat(scrollView.getContext());
           edgeGlowBottom.set(scrollView, edgeEffect);
       }

       Views.setEdgeGlowColor(edgeEffect, color);

   } catch (Exception ex) {

       ex.printStackTrace();
   }

}

Finally, set the edge glow colour. This only works on versions at or above LOLLIPOP, as EdgeEffect.setColor() was introduced in Android beginning with that version.

@TargetApi(Build.VERSION_CODES.LOLLIPOP)

public static void setEdgeGlowColor(@NonNull EdgeEffectCompat edgeEffect, @ColorInt int color) throws Exception {
        Field field = EdgeEffectCompat.class.getDeclaredField("mEdgeEffect");

        field.setAccessible(true);
        EdgeEffect effect = (EdgeEffect) field.get(edgeEffect);
        
        if (effect != null)
            effect.setColor(color);
    }

 

(Don’t forget to catch any exceptions. You can monitor them by using Log.e("Error", "Message", e); for debugging and testing.)
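For example, a caller guarded by the version check could look like this (the view ID and colour resource are assumptions, and the helper above is assumed to be accessible from here):

if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
    NestedScrollView scrollView = (NestedScrollView) findViewById(R.id.nested_scroll_view); // assumed ID
    int color = ContextCompat.getColor(this, R.color.color_primary);                        // assumed colour
    changeGlowColor(color, scrollView);
}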

Resources

  • https://developer.android.com/reference/android/support/v4/widget/NestedScrollView.html

Use of SwipeRefreshLayout in Phimpme

In an image application, the first thing we want is a list of all images from the user’s device at the start of the application, in the splash screen. But what if the number of images on the device changes while the application is running, because images are deleted or new ones are copied to the device? This is where the swipe refresh layout helps us obtain the up-to-date list of images from the device by just swiping vertically downwards.

In the Phimpme application, we have made use of the swipe refresh layout to update the images according to the latest data. In this post, I will discuss how we achieved this in the Phimpme application with the help of some code examples.

Steps to implement the SwipeRefreshLayout:

  1. The first thing we need to do is add the support library to our project build.gradle file.
dependencies {
 compile 'com.android.support:recyclerview-v7:25.3.1'
 compile 'com.android.support:support-v4:25.3.1'
}

2. Now add the SwipeRefreshLayout in the activity’s XML file where you want to implement it, with the RecyclerView as its child view, which should match the parent layout. This can be done using the code below:

<android.support.v4.widget.SwipeRefreshLayout
    android:id="@+id/swipeRefreshLayout"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:paddingBottom="@dimen/height_bottombar" >

    <android.support.v7.widget.RecyclerView
        android:id="@+id/grid_albums"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_gravity="center_horizontal"
        android:scrollbarThumbVertical="@drawable/ic_scrollbar"
        android:scrollbars="vertical" />

</android.support.v4.widget.SwipeRefreshLayout>

3. Now we have to implement the OnRefreshListener, which handles the refresh operation when a vertical swipe is performed.

Activity.java

SwipeRefreshLayout swipeRefreshLayout = (SwipeRefreshLayout) findViewById(R.id.swipeRefreshLayout);
swipeRefreshLayout.setOnRefreshListener(new SwipeRefreshLayout.OnRefreshListener() {
    @Override
    public void onRefresh() {
    }
});

 

Now we can do anything in the onRefresh method that we want to happen when the user swipes vertically.

For example, we want to refresh the RecyclerView with the latest images and display the content, and this should be done in a background thread.

So we will use an AsyncTask subclass to update the data.

  @Override
  public void onRefresh() {
      new RefreshTask().execute(); // This will execute the background task defined below
  }
});

RefreshTask.java (the task class is renamed from AsyncTask so that its name does not clash with the android.os.AsyncTask it extends):

private class RefreshTask extends AsyncTask<Void, Integer, Void> {

    @Override
    protected void onPreExecute() {
        swipeRefreshLayout.setRefreshing(true);
        super.onPreExecute();
    }

    @Override
    protected Void doInBackground(Void... arg0) {
        // refresh the content of the recyclerView here
        return null;
    }

    @Override
    protected void onPostExecute(Void result) {
        swipeRefreshLayout.setRefreshing(false);
    }
}

 

When the user swipes down vertically, the progress indicator is shown on screen, which is done by calling swipeRefreshLayout.setRefreshing(true) in the onPreExecute() method.

In doInBackground() method we have to load the content of RecyclerView through an adapter.

Once the data is loaded and set on the RecyclerView, the onPostExecute() method is called, and now we have to dismiss the loading progress bar with:

  swipeRefreshLayout.setRefreshing(false);

This is how SwipeRefreshLayout works.

The above code helps us to reload the album content in the gallery.

For more details refer here in the Phimpme project for SwipeRefreshLayout.

https://github.com/fossasia/phimpme-android/blob/development/app/src/main/java/vn/mbm/phimp/me/leafpic/activities/LFMainActivity.java

Resources :

https://developer.android.com/reference/android/support/v4/widget/SwipeRefreshLayout.html

https://developer.android.com/training/swipe/add-swipe-interface.html
