Presenters via Loaders in Open Event Organizer Android App

Open Event Organizer's app follows the Model View Presenter (MVP) architecture, which facilitates heavy unit testing of the app. In this design pattern, each fragment/activity implements a view interface, which uses a presenter interface to interact with a model interface. The presenter holds most of the view's data, so it is very important to restore presenters after configuration changes like rotation. On rotation, the complete activity is re-created, all its fields are destroyed, and everything is re-generated, resulting in an unexpected loss of state. The Open Event Organizer App uses a Loader to store and provide presenters to the activity/fragment, since a Loader survives configuration changes. The idea of using a Loader to provide the presenter is taken from Antonio Gutierrez's blog on "Presenters surviving orientation changes with loaders".

The first thing to do is to make a PresenterLoader<T> class extending Loader<T>, where T is your presenter's base interface. The PresenterLoader class in the app looks like:

public class PresenterLoader<T extends IBasePresenter> extends Loader<T> {

   private T presenter;

   ...

   @Override
   protected void onStartLoading() {
       super.onStartLoading();
       // Deliver the cached presenter; it survives as long as the loader does
       deliverResult(presenter);
   }

   @Override
   protected void onReset() {
       super.onReset();
       // The loader is being destroyed, so detach and release the presenter
       presenter.detach();
       presenter = null;
   }

   public T getPresenter() {
       return presenter;
   }
}


The methods are pretty clear from their names. Once this is done, you are ready to use this loader in your fragment/activity. Creating a BaseFragment or BaseActivity is a good idea, so that you don't have to add the same logic everywhere. We will take the use case of an activity. A loader has a unique id by which it is saved in the app, so use a unique id for each fragment/activity. Using that id, the loader is obtained in the app.

Loader<P> loader = getSupportLoaderManager().getLoader(getLoaderId());


When created for the first time, the loader is set up with the loader callbacks, where the presenter creation logic actually lives. In the Organizer App, we are using Dagger dependency injection to inject the presenter the first time. If you are not using Dagger, you should create a PresenterFactory class containing a create method for the presenter, and pass the PresenterFactory object to the PresenterLoader in onCreateLoader.
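A minimal sketch of such a factory, in case you are not using Dagger (the names here are illustrative, not the Organizer app's actual classes); the PresenterLoader would call create() the first time it needs a presenter:

public interface PresenterFactory<T extends IBasePresenter> {
    // Called by the PresenterLoader exactly once, when no presenter exists yet
    T create();
}

Since the Organizer App uses Dagger, the factory is not needed and the callbacks simplify to this: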

getSupportLoaderManager().initLoader(getLoaderId(), null, new LoaderManager.LoaderCallbacks<P>() {
   @Override
   public Loader<P> onCreateLoader(int id, Bundle args) {
       return new PresenterLoader<>(BaseActivity.this, getPresenterProvider().get());
   }

   @Override
   public void onLoadFinished(Loader<P> loader, P presenter) {
       BaseActivity.this.presenter = presenter;
   }

   @Override
   public void onLoaderReset(Loader<P> loader) {
       BaseActivity.this.presenter = null;
   }
});


The getPresenterProvider method returns a Lazy<Presenter> provider to ensure that only a single presenter is created per activity/fragment. The lifecycle method to set up the PresenterLoader in an activity is onCreate, and in a fragment it is onActivityCreated. Use the presenter field only from the next lifecycle state onwards, that is, start; if the presenter is used before start, it causes a NullPointerException. For example, if implementing this in a BaseFragment, set up the loader in the onActivityCreated method.

@Override
protected void onCreate(@Nullable Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    Loader<P> loader = getSupportLoaderManager().getLoader(getLoaderId());
    if (loader == null) {
        initLoader();
    } else {
        presenter = ((PresenterLoader<P>) loader).getPresenter();
    }
}
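The snippet above is the activity version. For a BaseFragment, the same logic would move to onActivityCreated; a sketch, assuming the fragment uses its own LoaderManager (not the Organizer app's exact code):

@Override
public void onActivityCreated(@Nullable Bundle savedInstanceState) {
    super.onActivityCreated(savedInstanceState);
    // Same restore-or-create logic as the activity, run once the activity exists
    Loader<P> loader = getLoaderManager().getLoader(getLoaderId());
    if (loader == null) {
        initLoader();
    } else {
        presenter = ((PresenterLoader<P>) loader).getPresenter();
    }
}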


Make sure that your base presenter interface declares the basic lifecycle methods, for example onDetach, onAttach, etc. The getLoaderId method must be implemented in each fragment/activity using loaders, and it must return a unique id per fragment/activity. In the Organizer App, the method returns the layout id of the fragment/activity as a unique id.
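A sketch of what that can look like (the layout name here is hypothetical, for illustration):

// The layout resource id is already unique per screen, so it doubles as the loader id
@Override
protected int getLoaderId() {
    return R.layout.activity_events; // hypothetical layout name
}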

Using the loader approach to store and restore presenters lets their instances survive configuration changes in the app, and hence improves performance.

Links:
Antonio Gutierrez’s blog post about Presenter surviving orientation changes with Loaders in Android
Android Documentation for Loaders


Real time Sensor Data Analysis on PSLab Android

The PSLab device can connect to plug-and-play sensors through the I2C bus, and these sensors are capable of providing data in real time. So, the PSLab Android App and the Desktop app need a feature to fetch real-time sensor values, display them in the user interface, and plot them on a simple graph.

The UI was made following the guidelines of Google's Material Design and incorporating some ideas from the Science Journal app. Cards are used for each section of the UI, with segregated sections for real-time updates and for plotting, where the real-time data can be visualised. Methods for fetching the data run continuously in the background; they receive the data from the sensor and then update the screen.

The following section shows a small portion of the UI responsible for displaying the data on the screen continuously, and it is quite simple. There are a number of TextViews which are constantly updated on the screen; their number depends on the type and volume of data sent by the sensor.

<TextView
       android:layout_width="wrap_content"
       android:layout_height="30dp"
       android:layout_gravity="start"
       android:text="@string/ax"
       android:textAlignment="textStart"
       android:textColor="@color/black"
       android:textSize="@dimen/textsize_edittext"
       android:textStyle="bold" />

<TextView
       android:id="@+id/tv_sensor_mpu6050_ax"
       android:layout_width="wrap_content"
       android:layout_height="30dp"
       android:layout_gravity="start"
       android:textAlignment="textStart"
       android:textColor="@color/black"
       android:textSize="@dimen/textsize_edittext"
       android:textStyle="bold" />


The section here represents the portion of the UI responsible for displaying the graph. Like all other parts of the UI of PSLab Android, MPAndroidChart is used here for plotting the graph.

<LinearLayout
       android:layout_width="match_parent"
       android:layout_height="160dp"
       android:layout_marginTop="40dp">

       <com.github.mikephil.charting.charts.LineChart
               android:id="@+id/chart_sensor_mpu6050"
               android:layout_width="match_parent"
               android:layout_height="match_parent"
               android:background="#000" />
</LinearLayout>
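The XML above only declares the chart; the actual plotting happens in Java. The following is a rough sketch of how a freshly fetched value could be appended to the chart with MPAndroidChart (assuming v3.x of the library and a LineChart field bound to the id above; this is illustrative, not PSLab's exact code):

private void addChartEntry(float value) {
    LineData data = mChart.getData();
    if (data == null) {
        data = new LineData();
        mChart.setData(data);
    }
    ILineDataSet set = data.getDataSetByIndex(0);
    if (set == null) {
        // One data set per plotted quantity, e.g. acceleration along x
        set = new LineDataSet(null, "Ax");
        data.addDataSet(set);
    }
    // Append the new reading and refresh the chart
    data.addEntry(new Entry(set.getEntryCount(), value), 0);
    data.notifyDataChanged();
    mChart.notifyDataSetChanged();
    mChart.setVisibleXRangeMaximum(100); // keep a sliding window of recent samples
    mChart.moveViewToX(data.getEntryCount());
}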


Since the updates need to be continuous, a process should run continuously to update the display of the data and the graph. There are a variety of options available in Android in this regard, like using a Timer on the UI thread to keep updating the data, using AsyncTask to run the work in the background, etc.

The issue with the former is that since all the work, i.e. fetching the data and updating the TextViews & graph, would run on the UI thread, the UI would become laggy. So, the developer team chose to use AsyncTask and make all the processing run in the background so that the UI thread functions smoothly.

A new class SensorDataFetch, which extends AsyncTask, is defined, and its object is created inside a Runnable. The use of the Runnable ensures that data keeps being fetched for as long as the fragment is used by the user.

scienceLab = ScienceLabCommon.scienceLab;
i2c = scienceLab.i2c;
try {
    MPU6050 = new MPU6050(i2c);
} catch (IOException e) {
    e.printStackTrace();
}
Runnable runnable = new Runnable() {
    @Override
    public void run() {
        while (true) {
            if (scienceLab.isConnected()) {
                sensorDataFetch = new SensorDataFetch();
                sensorDataFetch.execute();
            }
            try {
                // Small pause between fetches so tasks are not spawned in a tight loop
                // (the exact interval is a choice made here, not PSLab's original value)
                Thread.sleep(500);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
};
new Thread(runnable).start();


The following is the code for the AsyncTask created. Two methods are defined here, doInBackground and onPostExecute, which are responsible for fetching the data and updating the display respectively.

The raw data is fetched using the getRaw method of the MPU6050 object and stored in an ArrayList. The data type used to store the data depends on the return type of each sensor class's getRaw method and might be different for other sensors. The data returned by getRaw is semi-processed and only needs to be split into sections before presenting it for display.

The PSLab Android app's sensor files can be viewed here; they give a better idea of how the sensors are calibrated, how the intrinsic nonlinearity is taken care of, how the communication actually works, etc.

After the data is stored, control moves to the onPostExecute method, where the TextViews on the display and the chart are updated. The update is slowed down a bit so that the user can visualize the data received.

private class SensorDataFetch extends AsyncTask<Void, Void, Void> {

   private ArrayList<Double> dataMPU6050 = new ArrayList<>();

   @Override
   protected Void doInBackground(Void... params) {
       try {
           if (MPU6050 != null) {
               // Fetch the semi-processed raw values from the sensor over I2C
               dataMPU6050 = MPU6050.getRaw();
           }
       } catch (IOException e) {
           e.printStackTrace();
       }
       return null;
   }

   @Override
   protected void onPostExecute(Void aVoid) {
       super.onPostExecute(aVoid);
       // Accelerometer (ax, ay, az), gyroscope (gx, gy, gz) and temperature values
       tvSensorMPU6050ax.setText(String.valueOf(dataMPU6050.get(0)));
       tvSensorMPU6050ay.setText(String.valueOf(dataMPU6050.get(1)));
       tvSensorMPU6050az.setText(String.valueOf(dataMPU6050.get(2)));
       tvSensorMPU6050gx.setText(String.valueOf(dataMPU6050.get(3)));
       tvSensorMPU6050gy.setText(String.valueOf(dataMPU6050.get(4)));
       tvSensorMPU6050gz.setText(String.valueOf(dataMPU6050.get(5)));
       tvSensorMPU6050temp.setText(String.valueOf(dataMPU6050.get(6)));
   }
}

The detailed implementation of the same can be found here.

Additional Resources

  1. Learn more about how real time sensor data analysis can be used in various fields like IoT http://ieeexplore.ieee.org/document/7248401/
  2. Google Fit guide on how to use native built-in sensors on phones, smart watches etc. https://developers.google.com/fit/android/sensors
  3. A simple starter guide to build an app capable of real time sensor data analysis http://developer.telerik.com/products/building-an-android-app-that-displays-live-accelerometer-data/
  4. Learn more about using AsyncTask https://developer.android.com/reference/android/os/AsyncTask.html

Creating a Four Quadrant Graph for PSLab Android App

While working on the Oscilloscope in PSLab Android, we had to implement XY mode. XY plotting is part of a regular oscilloscope; in XY plotting, the two signals are plotted against each other. For XY plotting we require a graph with all 4 quadrants, but none of the graph-view libraries in Android support a 4-quadrant graph, so we needed to find a solution. We used the Canvas class to draw a 4-quadrant graph. The Canvas class defines methods for drawing text, lines, bitmaps, and many other graphics primitives. Let's discuss how a 4-quadrant graph is implemented using Canvas.

Initially, a class Plot2D extending View is created, along with a constructor to which the context, the values for the X and Y axes, and the axis option are passed.

public class Plot2D extends View {

    // Fields assumed from the constructor; the full class also holds the min/max axis values
    private float[] xValues;
    private float[] yValues;
    private int axis;
    private int vectorLength;
    private Paint paint;

    public Plot2D(Context context, float[] xValues, float[] yValues, int axis) {
        super(context);
        this.xValues = xValues;
        this.yValues = yValues;
        this.axis = axis;
        vectorLength = xValues.length;
        paint = new Paint();
        getAxis(xValues, yValues); // presumably computes the min/max values for both axes (elided here)
    }
    ...
}


So, now we need to convert a float value into a pixel position. This is the most important part, and for this we create a function to which we send the size in pixels, the minimum and maximum values of the axis, and the array of float values; we get an array of converted pixel values in return. p[i] = .1 * pixels + ((value[i] - min) / (max - min)) * .8 * pixels; is the transformation used: it maps each value into the central 80% of the canvas, leaving a 10% margin on each side.

private int[] toPixel(float pixels, float min, float max, float[] value) {
   double[] p = new double[value.length];
   int[] pInt = new int[value.length];

   // Map each value into the central 80% of the canvas, leaving a 10% margin on both sides
   for (int i = 0; i < value.length; i++) {
       p[i] = .1 * pixels + ((value[i] - min) / (max - min)) * .8 * pixels;
       pInt[i] = (int) p[i];
   }
   return (pInt);
}
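The onDraw code below also uses a toPixelInt helper that is not part of the original excerpt; presumably it is just the single-value analogue of toPixel, something like:

// Assumed single-value version of toPixel (not shown in the original excerpt)
private int toPixelInt(float pixels, float min, float max, float value) {
    return (int) (.1 * pixels + ((value - min) / (max - min)) * .8 * pixels);
}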


Constructing the graph requires drawing the axes, adding markings/labels, and plotting the data. To achieve this we override the onDraw method. The parameter to onDraw() is a Canvas object that the view uses to draw itself. First, we need to get various parameters, like the data to plot, the canvas height and width, and the locations of the x and y axes.

@Override
protected void onDraw(Canvas canvas) {

   float canvasHeight = getHeight();
   float canvasWidth = getWidth();
   int[] xValuesInPixels = toPixel(canvasWidth, minX, maxX, xValues);
   int[] yValuesInPixels = toPixel(canvasHeight, minY, maxY, yValues);
   int locxAxisInPixels = toPixelInt(canvasHeight, minY, maxY, locxAxis);
   int locyAxisInPixels = toPixelInt(canvasWidth, minX, maxX, locyAxis);


Drawing the axis

First, we will draw the axes, in white. To draw the white axis lines we use the following code.

paint.setColor(Color.WHITE);
paint.setStrokeWidth(5f);
canvas.drawLine(0, canvasHeight - locxAxisInPixels, canvasWidth,
       canvasHeight - locxAxisInPixels, paint);
canvas.drawLine(locyAxisInPixels, 0, locyAxisInPixels, canvasHeight,
       paint);


Adding the labels

After drawing the axis lines, we need to mark labels for both the x and y axes. For this, we use the following code in the onDraw method. The axis labels are automatically marked after a fixed distance; the number of labels depends on the value of n. The code ensures that the markings are apt for each quadrant: for example, in the first quadrant the markings of the x axis are below the axis, whereas the markings of the y axis are to its left.

float temp = 0.0f;
int n = 8;
for (int i = 1; i <= n; i++) {
    if (i <= n / 2) {
        // Round labels to one decimal; note the float division by 10f,
        // integer division here would truncate the label values
        temp = Math.round(10 * (minX + (i - 1) * (maxX - minX) / n)) / 10f;
        canvas.drawText("" + temp,
                (float) toPixelInt(canvasWidth, minX, maxX, temp),
                canvasHeight - locxAxisInPixels - 10, paint);
        temp = Math.round(10 * (minY + (i - 1) * (maxY - minY) / n)) / 10f;
        canvas.drawText("" + temp, locyAxisInPixels + 10, canvasHeight
                        - (float) toPixelInt(canvasHeight, minY, maxY, temp),
                paint);
    } else {
        temp = Math.round(10 * (minX + (i - 1) * (maxX - minX) / n)) / 10f;
        canvas.drawText("" + temp,
                (float) toPixelInt(canvasWidth, minX, maxX, temp),
                canvasHeight - locxAxisInPixels + 30, paint);
        temp = Math.round(10 * (minY + (i - 1) * (maxY - minY) / n)) / 10f;
        canvas.drawText("" + temp, locyAxisInPixels - 65, canvasHeight
                        - (float) toPixelInt(canvasHeight, minY, maxY, temp),
                paint);
    }
}

Using this code, evenly spaced labels are drawn along both axes, in all four quadrants.

Plotting the data

The last step is to plot the data. To achieve this we first convert the float values of the x-axis and y-axis data points to pixels using the toPixel method, and draw line segments between consecutive points on the graph. In addition, we set a red color for the line.

paint.setStrokeWidth(2);
canvas.drawARGB(255, 0, 0, 0); // black background
paint.setColor(Color.RED);
for (int i = 0; i < vectorLength - 1; i++) {
   // Connect consecutive data points; y is flipped since canvas y grows downwards
   canvas.drawLine(xValuesInPixels[i], canvasHeight
           - yValuesInPixels[i], xValuesInPixels[i + 1], canvasHeight
           - yValuesInPixels[i + 1], paint);
}


This implements a 4-quadrant graph in the PSLab Android app for XY plotting in the Oscilloscope activity. The entire code for the same is available here.
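As a quick illustration of how such a view could be used from an activity (hypothetical data; PSLab's actual invocation may differ):

// Inside an Activity's onCreate: plot 100 points of a Lissajous-style XY figure
float[] xs = new float[100];
float[] ys = new float[100];
for (int i = 0; i < 100; i++) {
    xs[i] = (float) Math.sin(2 * Math.PI * i / 100);
    ys[i] = (float) Math.sin(4 * Math.PI * i / 100);
}
setContentView(new Plot2D(this, xs, ys, 1)); // axis argument as in the constructor above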

Resources

  1. A simple 2D Plot class for Android
  2. Android.com reference of Custom Drawing

Using Map View to Display Location of Clicked Image in Phimpme

Previously in the Phimpme Android app we used to display all the images on a map, based on the location where they were taken. In the upcoming version, we are introducing a map view that shows the location when the user checks out the details of an image. In this post, I explain how I have implemented Google's map view in Phimpme. Note: as a prerequisite to display the image location in the map view, the image should have geolocation in its metadata.

Let’s get started

Step 1: First, enable the setting 'Show mapview in image description' and save that preference.

Whether to give users the choice to view the map or not is up to you.

First, we need to turn on the map visibility and choose a map provider in settings. Right now we offer two maps in our list: Google Maps and OpenStreetMap.

Choose Map Provider from the list in Phimpme

Once you choose your preference, it gets stored in SharedPreferences. As we are displaying the map in the image details, we need to add an ImageView for the map.

Step 2: Add an ImageView in your XML, in which we will display the map.

<ImageView
   android:id="@+id/photo_map"
   android:layout_width="match_parent"
   android:layout_height="150dp"
   android:visibility="gone"
/>

Keep the default visibility of the map view as GONE; if the user enables the map view in settings, we make this image visible.

Step 3: Load the map image into the ImageView.

The good thing is that StaticMapProvider gives you the URL of an image corresponding to a particular GeoLocation, which is actually a rendered view of the map. Now we need to display the map view when the user taps on the details of an image.

Now we need to display mapview when the user taps on details of an image.

ImageView imgMap = (ImageView) dialogLayout.findViewById(R.id.photo_map);
final GeoLocation location;
if ((location = f.getGeoLocation()) != null) {
   PreferenceUtil SP = PreferenceUtil.getInstance(activity.getApplicationContext());

   StaticMapProvider staticMapProvider = StaticMapProvider.fromValue(
           SP.getInt(activity.getString(R.string.preference_map_provider), StaticMapProvider.GOOGLE_MAPS.getValue()));

   // Load the static map image for this location into the ImageView
   Glide.with(activity.getApplicationContext())
           .load(staticMapProvider.getUrl(location))
           .asBitmap()
           .centerCrop()
           .animate(R.anim.fade_in)
           .into(imgMap);
}

To load the image, we used the Glide image library; you can use any library of your choice.
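For reference, a static map provider boils down to building an ordinary image URL for the given coordinates. A hypothetical sketch in the style of Google's Static Maps API (the parameters here are illustrative, not Phimpme's exact format):

// Illustrative only: builds a static map image URL centered on the given location
String getUrl(GeoLocation location) {
    return "https://maps.googleapis.com/maps/api/staticmap"
            + "?center=" + location.getLatitude() + "," + location.getLongitude()
            + "&zoom=15&size=600x300"
            + "&markers=" + location.getLatitude() + "," + location.getLongitude();
}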

MapView on the top of dialog box in Phimpme

Step 4: Open navigation on a tap of the map view or image.

Attach a click listener to the ImageView; on click, create a Uri corresponding to the geolocation and fire an Intent for Android to view that link. It will open Google Maps with navigation.

// A geo: URI opens the location in the maps app, zoomed to level 17
String uri = String.format(Locale.ENGLISH, "geo:%f,%f?z=%d", location.getLatitude(), location.getLongitude(), 17);
activity.startActivity(new Intent(Intent.ACTION_VIEW, Uri.parse(uri)));


Acceptance Testing of a Feature in Open Event Frontend

In Open Event Frontend, we have integration tests for the ember components which are used throughout the project. But even with those tests, user interaction can still surface errors. We perform acceptance tests to catch such scenarios.

Acceptance tests interact with the application the way a user does and ensure that a feature functions properly, i.e. that the software system has met the requirement specifications. They are quite helpful for ensuring that our core features work properly.

Let us write an acceptance test for register feature in Open Event Frontend.

import { test } from 'qunit';
import moduleForAcceptance from 'open-event-frontend/tests/helpers/module-for-acceptance';

moduleForAcceptance('Acceptance | register');

In the first line we import test from 'qunit' (the default unit testing helper suite for Ember), which contains all the required test functions. Here we use the test function to define each acceptance test case; we can use it multiple times to check multiple scenarios.

Next, we import moduleForAcceptance from ‘open-event-frontend/tests/helpers/module-for-acceptance’ which deals with application setup and teardown.

test('visiting /register', function(assert) {
  visit('/register');

  andThen(function() {
    assert.equal(currentURL(), '/register');
  });
});

Inside our test function, we simulate visiting the /register route and then check that the current route is /register.

test('visiting /register and registering with existing user', function(assert) {
  visit('/register');
  andThen(function() {
    assert.equal(currentURL(), '/register');
    fillIn('input[name=email]', 'opev_test_user@nada.email');
    fillIn('input[name=password]', 'opev_test_user');
    fillIn('input[name=password_repeat]', 'opev_test_user');
    click('button[type=submit]');
    andThen(function() {
      assert.equal(currentURL(), '/register');
      // const errorMessageDiv = findWithAssert('.ui.negative.message');
      // assert.equal(errorMessageDiv[0].textContent.trim(), 'An unexpected error occurred.');
    });
  });
});

Then we simulate visiting the /register route and registering an existing user. For this, we first go to the /register route and check that the current route is the register route. We fill the register form with the existing user's data, hit submit, and then assert that we remain on /register, since registration with an existing email should fail.

test('visiting /register after login', function(assert) {
  login(assert);
  andThen(function() {
    visit('/register');
    andThen(function() {
      assert.equal(currentURL(), '/');
    });
  });
});

The third test simulates visiting the /register route with a user logged in, and this is very simple. We just visit the /register route and then check that we are at the / route, because a user is redirected to / when trying to visit /register after login.

And once we have covered all the possible combinations, we run the tests with the following command:

ember test --server

But there is a drawback to acceptance tests: they boot up the whole EmberJS application and start us at the application.index route, so we then have to navigate to the page that contains the feature being tested. Writing acceptance tests for each and every feature would be a big waste of time and CPU cycles. For this reason, only core features are tested for acceptance.


Implementing QR Code Detector in Open Event Organizer App

One of the main features of the Open Event Organizer App is to scan a QR code from an attendee's ticket to validate his/her entry to an event. The app uses Google's Vision API library, com.google.android.gms.vision.barcode, for QR code detection. In this blog, I talk about how to use this library to implement QR code detection with dynamic frame support in an Android app. The library uses the term barcode for all the supported formats, including QR code; hence in this blog, I use the term barcode for the QR code format.

We use Google's Dagger for dependency injection in the app, so all the barcode related dependencies are injected into the activity using Dagger. Basically, there are two classes involved: BarcodeDetector and CameraSource. The basic workflow is to create a BarcodeDetector object, which handles QR code detection; add a SurfaceView in the layout, which is used by the CameraSource to show the preview to the user; and pass both of these to the CameraSource. Enough talk, let's look at the code from here on. If you are not familiar with Dagger dependency injection, I strongly suggest you have a look at a tutorial introducing it first.

So we have a barcode module class which takes care of creating the BarcodeDetector and CameraSource.

@Provides
BarcodeDetector providesBarCodeDetector(Context context) {
   BarcodeDetector barcodeDetector = new BarcodeDetector.Builder(context)
       .setBarcodeFormats(Barcode.QR_CODE)
       .build();
   return barcodeDetector;
}

@Provides
CameraSource providesCameraSource(Context context, BarcodeDetector barcodeDetector) {
   return new CameraSource
       .Builder(context, barcodeDetector)
       ...
       .build();
}


You can see in the code that the BarcodeDetector is passed to the CameraSource builder. Now comes the preview part: the user of the app should be able to see what is actually detected. Google has provided samples showing how to do that, including some classes that you can just add to your project. The classes (with links) are: BarcodeGraphic, CameraSourcePreview, GraphicOverlay and BarcodeGraphicTracker.

CameraSourcePreview is the custom view used in the QR detecting layout for the preview. It handles all the SurfaceView related work, together with the BarcodeGraphic view, which extends GraphicOverlay and is used to draw dynamic info based on the QR code detected. We use this class to draw a frame around the detected QR code. BarcodeGraphicTracker is used to receive newly detected items, add a graphical representation to an overlay, update the graphics as the item changes, and remove the graphics when the item goes away.

Override the draw method of BarcodeGraphic according to how you want to show results on the screen once a barcode is detected. The method in the Organizer app looks like:

@Override
public void draw(Canvas canvas) {
   if (barcode == null) {
       return;
   }
   // Draws the bounding box around the barcode.
   RectF rect = new RectF(barcode.getBoundingBox());
   ...
   int width = (int) ((rect.right - rect.left)/3);
   int height = (int) ((rect.top - rect.bottom)/3);

   canvas.drawBitmap(Bitmap.createScaledBitmap(frameBottomLeft, width, height, false), rect.left, rect.top, null);
   ...
   canvas.drawRect(rect, rectPaint);
}


The class has a Barcode field which gets updated on barcode detection. In the above method, the field rect gets the dimensions of the bounding box of the detected QR code, and accordingly, frames are drawn at the vertices of the rect. Include CameraSourcePreview enclosing GraphicOverlay in the activity's layout.

<...CameraSourcePreview
   android:id="@+id/preview"
   android:layout_width="match_parent"
   android:layout_height="match_parent">

   <...GraphicOverlay />

</...CameraSourcePreview>


CameraSourcePreview and GraphicOverlay are obtained in the activity from the layout. Pass the CameraSource and GraphicOverlay to the CameraSourcePreview using the method start. Now the last part left is setting the processor on the BarcodeDetector to connect it to the GraphicOverlay. Use BarcodeGraphicTracker, which connects the GraphicOverlay to the BarcodeDetector; this is done by passing a BarcodeTrackerFactory, which has a create method for BarcodeGraphicTracker, to the MultiProcessor. The code looks like:

barcodeDetector.setProcessor(
   new MultiProcessor.Builder<>(
       new BarcodeTrackerFactory(graphicOverlay)).build());
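For reference, the factory itself is small. The following is a sketch modeled on Google's vision samples (the constructor signatures are assumptions based on those samples, not necessarily the Organizer app's exact code):

class BarcodeTrackerFactory implements MultiProcessor.Factory<Barcode> {

    private final GraphicOverlay<BarcodeGraphic> graphicOverlay;

    BarcodeTrackerFactory(GraphicOverlay<BarcodeGraphic> graphicOverlay) {
        this.graphicOverlay = graphicOverlay;
    }

    @Override
    public Tracker<Barcode> create(Barcode barcode) {
        // One graphic per tracked barcode, drawn on the shared overlay
        BarcodeGraphic graphic = new BarcodeGraphic(graphicOverlay);
        return new BarcodeGraphicTracker(graphicOverlay, graphic);
    }
}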


Now the BarcodeDetector is connected to the layout. On each barcode detection, the preview is updated as specified in the overridden draw method of BarcodeGraphic.

Links:
Google’s Vision API – link
Google Dagger github repo link – https://github.com/google/dagger


Adding API endpoint to SUSI.AI for Skill Historization

SUSI Skill CMS is an editor to write and edit skills easily. It follows an API-centric approach where the SUSI server acts as the API server. Using Skill CMS we can browse the history of a skill, where we get the commit ID, the commit message, and the name of the author who made the changes to that skill. In this blog post we will see how to fetch the complete commit history of a skill in the SUSI skill repository. A skill is a set of intents; one text file represents one skill and may contain several intents which all belong together. SUSI skills are stored in the susi_skill_data repository, and we can access any skill based on four parameters: model, group, language, and skill. For managing version control in the skill data repository, the following dependency is added to build.gradle. JGit is a library which implements Git functionality in Java.

dependencies {
 compile 'org.eclipse.jgit:org.eclipse.jgit:4.6.1.201703071140-r'
}

To implement our servlet, we extend it from AbstractAPIHandler. In SUSI Server, an abstract class AbstractAPIHandler, extending HttpServlet and implementing the APIHandler interface, is provided.

public class HistorySkillService extends AbstractAPIHandler implements APIHandler {}

The AbstractAPIHandler checks the permissions of the user by comparing the user's role with the minimal base role of each servlet. Thus, to specify the user permission for a servlet, we need to override the getMinimalBaseUserRole method.

@Override
public BaseUserRole getMinimalBaseUserRole() {
    return BaseUserRole.ANONYMOUS;
}

User roles can be Admin, Privileged, User, or Anonymous. In our case it is Anonymous: a user need not log in to access this endpoint.

@Override
public String getAPIPath() {
    return "/cms/getSkillHistory.json";
}

This method sets the API endpoint path. One needs to send requests to http://api.susi.ai/cms/getSkillHistory.json to get the modification history of a skill. Next we implement the serviceImpl method, where we process the user request and return the service response.

@Override
public ServiceResponse serviceImpl(Query call, HttpServletResponse response, Authorization rights, final JsonObjectWithDefault permissions) {

    // Resolve the skill file from the four parameters: model, group, language, skill
    String model_name = call.get("model", "general");
    File model = new File(DAO.model_watch_dir, model_name);
    String group_name = call.get("group", "knowledge");
    File group = new File(model, group_name);
    String language_name = call.get("language", "en");
    File language = new File(group, language_name);
    String skill_name = call.get("skill", "wikipedia");
    File skill = new File(language, skill_name + ".txt");

    JSONArray commitsArray = new JSONArray();
    String path = skill.getPath().replace(DAO.model_watch_dir.toString(), "models");
    boolean success = false;

    // Open the skill data git repository
    FileRepositoryBuilder builder = new FileRepositoryBuilder();
    try {
        Repository repository = builder.setGitDir((DAO.susi_skill_repo))
                .readEnvironment() // scan environment GIT_* variables
                .findGitDir()      // scan up the file system tree
                .build();
        try (Git git = new Git(repository)) {
            // Fetch all commits that touched the skill file
            Iterable<RevCommit> logs = git.log().addPath(path).call();
            int i = 0;
            for (RevCommit rev : logs) {
                JSONObject commit = new JSONObject();
                commit.put("commitRev", rev);
                commit.put("commitName", rev.getName());
                commit.put("commitID", rev.getId().getName());
                commit.put("commit_message", rev.getShortMessage());
                commit.put("author", rev.getAuthorIdent().getName());
                commitsArray.put(i, commit);
                i++;
            }
            success = true;
        } catch (GitAPIException e) {
            e.printStackTrace();
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    if (commitsArray.length() == 0) {
        success = false;
    }

    JSONObject result = new JSONObject();
    result.put("commits", commitsArray);
    result.put("success", success);
    return new ServiceResponse(result);
}

To access any skill we need the parameters model, group, and language. We get these through the call.get method, where the first parameter is the key whose value we want and the second is the default value. Based on the received model, group, and language, we build the path of the skill file inside the susi_skill_data repository, then read the git environment variables and scan up the file system tree using FileRepositoryBuilder's build() method. Next we fetch all the commit logs of the skill file, store them in a JSON commits array, and finally pass it back as a service response with a success flag. In case of exceptions, the service responds with the success flag set to false.

We have successfully implemented the servlet. Check the working of the endpoint by sending a request like http://api.susi.ai/cms/getSkillHistory.json?model=general&group=knowledge&language=en&skill=bitcoin and inspecting the response.
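Based on the fields put into each commit object above, a response would look roughly like this (the values here are made up for illustration):

{
  "commits": [
    {
      "commitRev": "commit 0f1b2c3d...",
      "commitName": "0f1b2c3d...",
      "commitID": "0f1b2c3d...",
      "commit_message": "Update bitcoin skill",
      "author": "example-author"
    }
  ],
  "success": true
}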

SUSI Skill CMS uses this endpoint to fetch the skill history; try it out at http://skills.susi.ai/browseHistory


Writing Selenium Tests for Checking Bookmark Feature and Search functionality in Open Event Webapp

We integrated Selenium testing in the Open Event Webapp and are in full swing writing tests to check the major features of the webapp. Tests help us catch issues/bugs which had been solved earlier but keep resurfacing when new changes are incorporated into the repo. In this post, I describe the major features that we are testing.

Bookmark Feature
The first major feature that we want to test is the bookmark feature. It allows users to mark sessions they are interested in and view them all at once with a single click on the starred button. We want to ensure that the feature works on all the pages.

Let us discuss the design of the test. First, we start with the tracks page. We select a few sessions (2 here) for the test and note down their session ids. Finding an element by its id is simple in Selenium and can be done easily. After we find the session element, we find the bookmark button inside it (with the help of its class name) and click on it to mark the session. After that, we click on the starred button to display only the marked sessions and proceed to count the number of visible session elements on the page. If the number of visible session elements comes out to be 2 (the ones that we marked), it means that the feature is working. If the number deviates, something is wrong and the test fails.


Here is a part of the code implementing the above logic. The whole code can be seen here.

// Returns the number of visible session elements on the tracks page
TrackPage.getNoOfVisibleSessionElems = function() {
 return this.findAll(By.className('room-filter')).then(this.getElemsDisplayStatus).then(function(displayArr) {
   return displayArr.reduce(function(counter, value) { return value == 1 ? counter + 1 : counter; }, 0);
 });
};
// Bookmark the sessions, scrolls down the page and then count the number of visible session elements
TrackPage.checkIsolatedBookmark = function() {
 // Sample sessions having ids of 3014 and 3015 being checked for the bookmark feature
 var sessionIdsArr = ['3014', '3015'];
 var self = this;
 return  self.toggleSessionBookmark(sessionIdsArr).then(self.toggleStarredButton.bind(self)).then(function() {
   return self.driver.executeScript('window.scrollTo(0, 400)').then(self.getNoOfVisibleSessionElems.bind(self));
 });
};

Here is the excerpt of code which matches the actual number of visible session elements to the expected number. You can view the whole test script here.

//Test for checking the bookmark feature on the tracks page
it('Checking the bookmark toggle', function(done) {
 trackPage.checkIsolatedBookmark().then(function(num) {
   assert.equal(num, 2);
   done();
 }).catch(function(err) {
   done(err);
 });
});

Now, we want to test this feature on the other pages: the schedule and rooms pages. We could simply follow the same approach as on the tracks page, but it is expensive in time: checking the visibility of all the session elements present on the page takes quite a while due to the large number of sessions. We need a different approach.

We had already marked two elements on the tracks page. We then go to the schedule page, click on the starred mode, and calculate the current height of the page. We then unmark a session and recalculate the height. If the bookmark feature is working, the height should decrease; this determines the correctness of the test. We follow the same approach on the rooms pages too. While this is not absolutely rigorous, it is a good way to check the feature: we have already employed the exact method on the tracks page, so there was no need to apply it on the schedule and rooms pages as well, since that would have increased the testing time by quite a large margin.

Here is an excerpt of the code. The whole work can be viewed here.

RoomPage.checkIsolatedBookmark = function() {
 // We go into starred mode and unmark sessions having id 3015 which was marked previously on tracks pages. If the bookmark feature works, then length of the web page would decrease. Return true if that happens. False otherwise
 var getPageHeight = 'return document.body.scrollHeight';
 var sessionIdsArr = ['3015'];
 var self = this;
 var oldHeight, newHeight;
 return self.toggleStarredButton().then(function() {
   return self.driver.executeScript(getPageHeight).then(function(height) {
     oldHeight = height;
     return self.toggleSessionBookmark(sessionIdsArr).then(function() {
       return self.driver.executeScript(getPageHeight).then(function(height) {
         newHeight = height;
         return oldHeight > newHeight;
       });
     });
   });
 });
};

Search Feature
Now, let us move to testing the search feature of the webapp. The main object of focus is the search bar. It is present on all the pages: the tracks, rooms, schedule, and speakers pages. It allows the user to search for a particular session or speaker, instantly fetching the results as he/she types.

We want to ensure that this feature works across all the pages. The tracks, rooms, and schedule pages are similar in that they all display all the sessions of the event, albeit in different manners. A query made on any one of these pages should fetch the same number of session elements on the other pages too. The speakers page contains mostly information about the speakers, so we write a single common test for the former three pages and a slightly different test for the latter.

Designing a test for this feature is interesting. We want it to be fast and accurate. A simple way to approach it is to think of the components involved: one is the query text entered in the search input bar; the other is the list of sessions matching that text, which should be visible on the page after it is entered. We decide upon a text string and a list of session ids. The list contains the ids of the sessions that should be visible for the given query, plus a few ids of sessions which do not match the text. During the actual test, we enter the chosen text and check the visibility of the sessions in the list. If the result matches the expected order, the feature is working well and the test passes. Otherwise, something is wrong with the implementation and the test fails.

For example, we decide upon the search text 'Mario' and note the ids of the sessions which should be visible for that search.


Suppose the list of ids comes out to be

['3017', '3029', '3013', '3031']

We then add a few more session ids which should not be visible for that search text: here, two extra false ids, 3014 and 3015. The modified list would look like this

['3017', '3029', '3013', '3031', '3014', '3015']

Now we run the test, determine the visibility of the sessions present in the above list, compare it to the expected output, and accordingly determine the fate of the test.

Expected: [true, true, true, true, false, false]
Actual Output: [true, true, true, true, true, true]

Then the test would fail since the last two sessions were not expected to be visible.

Here is some code related to it. The whole work can be seen here.

function commonSearchTest(text, idList) {
 var self = this;
 var searchText = text || 'Mario';
 // First 4 session ids should show up on default search text and the last two not. If no idList provided for testing, use the idList for the default search text
 var arrId = idList || ['3017', '3029', '3013', '3031', '3014', '3015'];
 var promise = new Promise(function(resolve) {
   self.search(searchText).then(function() {
     var promiseArr = arrId.map(function(curElem) {
       return self.find(By.id(curElem)).isDisplayed();
     });

     self.resetSearchBar().then(function() {
       resolve(Promise.all(promiseArr));
     });
   });
 });
 return promise;
}

Here is the code for comparing the expected and the actual output. You can view the whole file here.

it('Checking search functionality', function(done) {
 schedulePage.commonSearchTest().then(function(boolArr) {
   assert.deepEqual(boolArr, [true, true, true, true, false, false]);
   done();
 }).catch(function(err) {
   done(err);
 });
});

The search functionality test for the speakers page is done in the same style; instead of session ids, we work with speaker ids there. Everything else is done in a similar manner.


Implementing Tickets API on Open Event Frontend to Display Tickets

This blog article illustrates how tickets are displayed on the public event page in Open Event Frontend using the tickets API. It also illustrates the use of the add-on ember-data-has-many-query and the role it plays in fetching data from various APIs. Our discussion primarily involves the public/index route. The primary endpoint of the Open Event API with which we are concerned for fetching the tickets of an event is

GET /v1/events/{event_identifier}/tickets

Since there are multiple routes under public, including public/index, and they share some common event data, it is efficient to make the call for the event on the public route rather than repeating it for each sub route. So the model for the public route is:

model(params) {
  return this.store.findRecord('event', params.event_id, { include: 'social-links' });
}

This model takes care of fetching all the event data but, as we can see, tickets are not included in its include parameter. The primary reason is that the tickets data is not required on every public route, only on the index route. However, tickets have a has-many relationship to events, and it is not possible to query them directly without fetching the entire event data again. This is where a really useful add-on, ember-data-has-many-query, comes in.

To quote the official documentation,

Ember Data's DS.Store supports querying top-level records using the query function. However, DS.hasMany and DS.belongsTo cannot be queried in the same way. This addon provides a way to query has-many and belongs-to relationships

So we can now proceed with the model for public/index route.

model() {
  const eventDetails = this._super(...arguments);
  return RSVP.hash({
    event   : eventDetails,
    tickets : eventDetails.query('tickets', {
      filter: [
        {
          and: [
            {
              name : 'sales-starts-at',
              op   : 'le',
              val  : moment().toISOString()
            },
            {
              name : 'sales-ends-at',
              op   : 'ge',
              val  : moment().toISOString()
            }
          ]
        }
      ]
    })
  });
}

We make use of this._super(...arguments) to reuse the event data fetched in the model of the public route, eliminating the need for a separate API call. Next, the ember-data-has-many-query add-on allows us to query the tickets of the event, and we apply filters restricting the tickets to only those whose sale is currently live.
After the tickets are fetched, they are passed on to the ticket-list component for display. We also need to take care of the case where there might be no tickets because the event organiser is using an external ticket URL for ticketing, which can easily be handled via the is-ticketing-enabled property of events. If ticketing is not enabled, we don't render the ticket-list component; instead, a button linked to the external ticket URL is rendered. Where ticketing is enabled, the various properties which need to be computed, such as the total price of tickets based on user input, are handled by the ticket-list component itself.

{{#if model.event.isTicketingEnabled}}
  {{public/ticket-list tickets=model.tickets}}
{{else}}
<div class="ui grid">
  <div class="ui row">
      <a href="{{ticketUrl}}" class="ui right labeled blue icon button">
        <i class="ticket icon"></i>
        {{t 'Order tickets'}}
      </a>
  </div>
  <div class="ui row muted text">
      {{t 'You will be taken to '}} {{ticketUrl}} {{t ' to complete the purchase of tickets'}}
  </div>
</div>
{{/if}}

This is the most efficient way to fetch tickets, and it ensures that only the relevant data is passed to the concerned ticket-list component without making any extra API calls. It is made possible by the ember-data-has-many-query add-on, with very minor changes required in the adapter and the event model: all that is required is to make the adapter and the event model extend the RestAdapterMixin and ModelMixin provided by the add-on, respectively.


Implement Marker Clustering in the Open Event Android App

Markers are an integral part of any map based service. In the Open Event Android App, for sample events like Mozilla All Hands 2017, there are a lot of microlocations that the organizers want to integrate into the app's map fragment. Due to the presence of a large number of markers, the map fragment becomes cluttered, which harms the user experience. As an example, imagine yourself as the user and seeing the map as in the image given below!

Therefore, to tackle a problem like this, the markers are grouped into clusters. On click of a cluster, the markers decluster and fall into their respective locations, with the map zoomed in.

Implementation

First and foremost, define the libraries to be used by the utilities in the build.gradle of your app module. Make sure to import the latest versions.

// Googleplay Variant
googleplayCompile 'com.google.android.gms:play-services-maps:10.2.6'
googleplayCompile 'com.google.android.gms:play-services-location:10.2.6'
googleplayCompile 'com.google.maps.android:android-maps-utils:0.4'


Implement the ClusterItem interface in your location POJO, which houses a marker's location. The POJO therefore overrides the getPosition() method of the ClusterItem interface, from which you return the LatLng.

public class MicrolocationClusterWrapper implements ClusterItem {

@Override
public LatLng getPosition() {
   return latLng;
}

}


Create a custom cluster renderer class that extends the default cluster renderer with your location POJO as its type parameter. Implement ClusterManager's OnClusterItemClickListener to listen to marker clicks and apply custom colors to them. Set the custom marker properties before the marker items are rendered, via the markerOptions inside onBeforeClusterItemRendered().

@Override
protected void onBeforeClusterItemRendered(MicrolocationClusterWrapper item, MarkerOptions markerOptions) {
    super.onBeforeClusterItemRendered(item, markerOptions);

    markerOptions.title(item.getMicrolocation().getName());
    if (microlocationClusterWrapper != null && item.equals(microlocationClusterWrapper)) {
        markerOptions.icon(ImageUtils.vectorToBitmap(context, R.drawable.map_marker, R.color.color_primary));
    } else {
        markerOptions.icon(ImageUtils.vectorToBitmap(context, R.drawable.map_marker, R.color.dark_grey));
    }
}

@Override
protected void onClusterItemRendered(final MicrolocationClusterWrapper clusterItem, Marker marker) {
    super.onClusterItemRendered(clusterItem, marker);
    clusterItem.setMarker(marker);
}

@Override
public boolean onClusterItemClick(MicrolocationClusterWrapper item) {
    if (microlocationClusterWrapper != null) {
        getMarker(microlocationClusterWrapper).setIcon(ImageUtils.vectorToBitmap(context, R.drawable.map_marker, R.color.dark_grey));
    }
    microlocationClusterWrapper = item;
    getMarker(item).setIcon(ImageUtils.vectorToBitmap(context, R.drawable.map_marker, R.color.color_primary));
    return false;
}


Finally, in your map fragment, initialize your map, the cluster manager class, and the custom cluster renderer you just created. Implement OnMapReadyCallback so that the GoogleMap object is not null. Remember to pass the cluster renderer as the listener for the cluster manager's cluster item clicks. Use setOnClusterClickListener to zoom the map on a click of the cluster.

private void handleClusterEvents() {
   clusterManager.setOnClusterItemClickListener(clusterRenderer);

   clusterManager.setOnClusterClickListener(cluster -> {
               mMap.animateCamera(CameraUpdateFactory.newLatLngZoom(
                       cluster.getPosition(), (float) Math.floor(mMap
                               .getCameraPosition().zoom + 2)), 300,
                       null);

               return true;
           });

   mMap.setOnMapClickListener(clusterRenderer);
}
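For completeness, here is a sketch of how the pieces could be wired together in onMapReady (the field and class names here are assumptions, not the app's exact code; with android-maps-utils 0.4 you may need setOnCameraChangeListener instead of setOnCameraIdleListener):

@Override
public void onMapReady(GoogleMap googleMap) {
    mMap = googleMap;
    clusterManager = new ClusterManager<>(getContext(), mMap);
    clusterRenderer = new CustomClusterRenderer(getContext(), mMap, clusterManager);
    clusterManager.setRenderer(clusterRenderer);

    // Let the cluster manager re-cluster on camera movement and handle marker taps
    mMap.setOnCameraIdleListener(clusterManager);
    mMap.setOnMarkerClickListener(clusterManager);

    // Add one cluster item per microlocation, then cluster
    for (MicrolocationClusterWrapper item : microlocations)
        clusterManager.addItem(item);
    clusterManager.cluster();

    handleClusterEvents();
}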


Conclusion

Maps are an integral part of any event based app, and marker clustering undoubtedly enhances the user experience in them.

Resources

  • Marker Clustering Android documentation

https://developers.google.com/maps/documentation/android-api/utility/marker-clustering

  • Complete Code Reference

https://github.com/fossasia/open-event-android/pull/1777

  • Marker Customization in the case of Clustering

https://github.com/googlemaps/google-maps-ios-utils/issues/21
