Use objects to pass multiple query parameters when making a request using Retrofit

There are many instances where the Android client needs to make an API call to the SUSI.AI server to send or fetch data. The client uses Retrofit to make this process easier and more convenient.
While making a GET or POST request, there are often multiple query parameters that need to be sent along with the base URL. Here is an example of how this is done:

@GET("/cms/getSkillFeedback.json")
Call<GetSkillFeedbackResponse> fetchFeedback(
                          @Query("model") String model,
                          @Query("group") String group,
                          @Query("language") String language,
                          @Query("skill") String skill);

 

As can be seen, the list of params can get very long. A long parameter list increases the risk of mismatched key-value pairs and typos.

This blog explains how to replace such multiple params with a single object, using the call to the getSkillFeedback.json API as an example.

Step – 1 : Replace multiple params with a query map.

@GET("/cms/getSkillFeedback.json")
Call<GetSkillFeedbackResponse> fetchFeedback(@QueryMap Map<String, String> query);

 

Step – 2 : Make a data class to hold query param values.

data class FetchFeedbackQuery(
       val model: String,
       val group: String,
       val language: String,
       val skill: String
)

 

Step – 3 : Instead of passing separate strings for each query param, pass an object of the data class. Hence, add the following code to the ISkillDetailsModel.kt interface.

...

fun fetchFeedback(query: FetchFeedbackQuery, listener: OnFetchFeedbackFinishedListener)
...

 

Step – 4 : Add a function to the singleton file (ClientBuilder.java) that gets the SUSI client, builds the query map and returns the call.

...

public static Call<GetSkillFeedbackResponse> fetchFeedbackCall(FetchFeedbackQuery queryObject){
   Map<String, String> queryMap = new HashMap<String, String>();
   queryMap.put("model", queryObject.getModel());
   queryMap.put("group", queryObject.getGroup());
   queryMap.put("language", queryObject.getLanguage());
   queryMap.put("skill", queryObject.getSkill());
   //Similarly add other params that might be needed
   return susiService.fetchFeedback(queryMap);
}
...

 

Step – 5 : Send a request to the getSkillFeedback.json API by passing a FetchFeedbackQuery object to the fetchFeedbackCall method of ClientBuilder.java, which in turn returns a call to that API.

...

override fun fetchFeedback(query: FetchFeedbackQuery, listener:  
                                   ISkillDetailsModel.OnFetchFeedbackFinishedListener) {

   fetchFeedbackResponseCall = ClientBuilder.fetchFeedbackCall(query)
   ...
}

 

No other major changes are needed: instead of passing individual strings for each query param to different methods and building maps in different places (such as in a view), create an object of the FetchFeedbackQuery class and use it to pass the data throughout the project. This ensures type safety. Also, data classes reduce the code length significantly and are therefore more convenient to use in practice.
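For illustration, a caller such as a presenter could build the query object and obtain the call roughly as in the following sketch (the parameter values are placeholders, and the callback body is left to the caller):

// Hypothetical usage: build the query object once and let ClientBuilder create the Retrofit call.
FetchFeedbackQuery query = new FetchFeedbackQuery("general", "Knowledge", "en", "Capital");
Call<GetSkillFeedbackResponse> feedbackCall = ClientBuilder.fetchFeedbackCall(query);
feedbackCall.enqueue(new Callback<GetSkillFeedbackResponse>() {
    @Override
    public void onResponse(Call<GetSkillFeedbackResponse> call, Response<GetSkillFeedbackResponse> response) {
        // pass the parsed feedback on to the listener / view
    }

    @Override
    public void onFailure(Call<GetSkillFeedbackResponse> call, Throwable t) {
        // report the error to the view
    }
});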


Fetch Five Star Skill Rating from getSkillList API in SUSI.AI Android

SUSI.AI previously had a thumbs up/down rating system, which has now been replaced by a five star skill rating system: the user can rate a skill from one to five stars. The UI components include a rating bar, and below it a section that displays the skill rating statistics – the total number of ratings, the average rating and a graph showing the percentage of users who rated the skill with five stars, four stars and so on.

SUSI.AI Skills are rules defined in the SUSI Skill Data repo; they are basically the processed responses that SUSI returns to user queries. When a user queries something from the SUSI Android app, a query is made to the SUSI Server, which in turn fetches data from SUSI Skill Data and returns a JSON response to the app. Similarly, to get skill ratings, a call to the ‘/cms/getSkillList.json’ API is made. For this API, the server checks the SUSI Skill Data repo for the skills and returns a JSON response consisting of all the required information, like skill name, author name, description, ratings, etc., to the app. This JSON response is then parsed to extract the individual fields and display the appropriate information in the skill details screen of the app.

API Information

The endpoint to fetch skills is ‘/cms/getSkillList.json’
The endpoint takes three parameters as input –

  • model – It tells the model to which the skill belongs. The default value is set to general.
  • group – It tells the group(category) to which the skill belongs. The default value is set to All.
  • language – It tells the language to which the skill belongs. The default value is set to en.

Since all skills have to be fetched, this API is called for every group individually. For instance, call “https://api.susi.ai/cms/getSkillList.json?group=Knowledge” to get all skills in group “Knowledge”. Similarly, call for other groups.
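As a rough sketch, the client can loop over the groups and fire one request per group (the group names and the skillListCallback used here are only illustrative):

// Illustrative only: request each group's skill list and merge the results in the callback.
String[] groups = {"Knowledge", "Entertainment", "Problem Solving"};
for (String group : groups) {
    Call<ListSkillsResponse> call = susiService.fetchListSkills(group);
    call.enqueue(skillListCallback);
}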

Here is a sample response of a skill named ‘Capital’ from the group Knowledge :

"capital": {
      "model": "general",
      "group": "Knowledge",
      "language": "en",
      "developer_privacy_policy": null,
      "descriptions": "A skill to tell user about capital of any country.",
      "image": "images/capital.png",
      "author": "chashmeet singh",
      "author_url": "https://github.com/chashmeetsingh",
      "skill_name": "Capital",
      "terms_of_use": null,
      "dynamic_content": true,
      "examples": ["What is the capital of India?"],
      "skill_rating": {
        "negative": "0",
        "positive": "4",
        "feedback_count" : 0,
        "stars": {
          "one_star": 0,
          "four_star": 1,
          "five_star": 0,
          "total_star": 1,
          "three_star": 0,
          "avg_star": 4,
          "two_star": 0
        }
      },
      "creationTime": "2018-03-17T17:11:59Z",
      "lastAccessTime": "2018-06-06T00:46:22Z",
      "lastModifiedTime": "2018-03-17T17:11:59Z"
    },


It consists of all details about the skill called ‘Capital’:

  1. Model (model)
  2. Group (group)
  3. Language (language)
  4. Developer Privacy Policy (developer_privacy_policy)
  5. Description (descriptions)
  6. Image (image)
  7. Author (author)
  8. Author URL (author_url)
  9. Skill name (skill_name)
  10. Terms of Use (terms_of_use)
  11. Content Type (dynamic_content)
  12. Examples (examples)
  13. Skill Rating (skill_rating)
  14. Creation Time (creationTime)
  15. Last Access Time (lastAccessTime)
  16. Last Modified Time (lastModifiedTime)

From among all this information, the information of interest for this blog is Skill Rating. This blog mainly deals with showing how to parse the JSON response to get the skill rating star values, so as to display the actual data in the skill rating graph.

A request to the getSkillList API is made for each group using the GET method.

@GET("/cms/getSkillList.json")
Call<ListSkillsResponse> fetchListSkills(@Query("group") String groups);

It returns a JSON response consisting of all the aforementioned information. Now, to parse the JSON response, do the following :

  1. Add a class for the response received as a result of the API call. ListSkillsResponse contains two objects – the group and the map of skills.
    This blog is about getting the skill rating, so let us proceed with parsing only the required part of the response. The skills object contains the skill data that we need; hence, a SkillData class is created next.

    class ListSkillsResponse {
       val group: String = "Knowledge"
       val skillMap: Map<String, SkillData> = HashMap()
    }
  2. Now, add the SkillData class. This class defines the response that we saw for the ‘Capital’ skill above. It contains the skill name, author, skill rating and so on.

    class SkillData : Serializable {
       var image: String = ""
       @SerializedName("author_url")
       @Expose
       var authorUrl: String = ""
       var examples: List<String> = ArrayList()
       @SerializedName("developer_privacy_policy")
       @Expose
       var developerPrivacyPolicy: String = ""
       var author: String = ""
       @SerializedName("skill_name")
       @Expose
       var skillName: String = ""
       @SerializedName("dynamic_content")
       @Expose
       var dynamicContent: Boolean? = null
       @SerializedName("terms_of_use")
       @Expose
       var termsOfUse: String = ""
       var descriptions: String = ""
       @SerializedName("skill_rating")
       @Expose
       var skillRating: SkillRating? = null
    }
    
  3. Now, add the SkillRating class. Since what is required is the skill rating, we narrow down to the skill_rating object, which contains the actual rating for each skill, i.e. the stars values. So, this file defines the response for the skill_rating object.

    class SkillRating : Serializable {
       var stars: Stars? = null
    }
    
  4. Further, add a Stars class. Ultimately, the values that are needed are the number of users who rated a skill at five stars, four stars and so on, as well as the total number of ratings and the average rating. Thus, this file models the values inside the ‘stars’ object.

    class Stars : Serializable {
       @SerializedName("one_star")
       @Expose
       var oneStar: String? = null
       @SerializedName("two_star")
       @Expose
       var twoStar: String? = null
       @SerializedName("three_star")
       @Expose
       var threeStar: String? = null
       @SerializedName("four_star")
       @Expose
       var fourStar: String? = null
       @SerializedName("five_star")
       @Expose
       var fiveStar: String? = null
       @SerializedName("total_star")
       @Expose
       var totalStar: String? = null
       @SerializedName("avg_star")
       @Expose
       var averageStar: String? = null
    }
    

Now, the parsing is all done. It is time to use these values to plot the skill rating graph and complete the section displaying the five star skill rating.

To plot these values on the skill rating graph, refer to the blog on plotting a horizontal bar graph using the MPAndroidChart library. In step 5 of the linked blog, replace the second parameter of the BarEntry constructor with the actual values obtained by parsing.

Here is how we do it.

  • To get the total number of ratings
val totalNumberOfRatings: Int = skillData.skillRating?.stars?.totalStar?.toInt() ?: 0

 

  • To get the average rating
val averageRating: Float = skillData.skillRating?.stars?.averageStar?.toFloat() ?: 0f

 

  • To get the number of users who rated the skill at five stars
val fiveStarUsers: Int = skillData.skillRating?.stars?.fiveStar?.toInt() ?: 0

Similarly, get the number of users for fourStar, threeStar, twoStar and oneStar.

Note : If totalNumberOfRatings equals zero, then the skill is unrated. In this case, display a message informing the user that the skill is unrated instead of plotting the graph.

Now, as the graph shows the percentage of users who rated the skill at a particular number of stars, calculate the percentage of users corresponding to each rating, convert the result to Float and pass it as the second parameter to the BarEntry constructor, as follows :

 entries.add(BarEntry(4f, fiveStarUsers.toFloat() / totalNumberOfRatings * 100f))

Similarly, replace the values for all five entries. Finally, add the total ratings and average rating section and display the detailed skill rating statistics for each skill, as in the following figure.


Create Call for Speakers with Open Event Organizer Android App

In the Open Event Organizer Android app, we already provide a variety of features to the user, but the functionality to create a call for speakers for an event was missing. This feature is needed to attract speakers from all over the world to participate in an event and make it a success. This blog explains how we added the feature to the project, following the MVVM architecture and using libraries like Retrofit, RxJava, DbFlow etc.

Objective

The goal is to provide the organizer an option to create a Call for Speakers for an event (if it doesn’t exist already). We provide the organizer a Floating Action Button in the Speakers Call detail layout which opens a fragment containing all the relevant fields provided by the Open Event Server. The organizer can fill up all the fields as per requirement and create the call for speakers.

Specifications

Let’s move on to the implementation details.

First, we will create the model which will contain all the fields offered by the server. We are using the Lombok library to reduce boilerplate code using annotations. For example, the @Data annotation is used to generate getters and setters, and @NoArgsConstructor generates a no-arguments constructor, as the name suggests. Along with that, we are using the @JsonNaming annotation provided by the Jackson library to serialize the model using the KebabCaseStrategy, and the @Table annotation provided by the Raizlabs DBFlow library to create a table in the database for this model.

@Data
@Type("speakers-call")
@NoArgsConstructor
@JsonNaming(PropertyNamingStrategy.KebabCaseStrategy.class)
@Table(database = OrgaDatabase.class, allFields = true)
public class SpeakersCall {

    @Id(LongIdHandler.class)
    @PrimaryKey
    public Long id;

    @Relationship("event")
    @ForeignKey(onDelete = ForeignKeyAction.CASCADE)
    public Event event;

    public String announcement;
    public String hash;
    public String privacy;
    public String startsAt;
    public String endsAt;
}

To make the network requests we will be using Retrofit 2.3.0, a REST client for Android and Java by Square Inc. It makes it relatively easy to retrieve and upload JSON (or other structured data) via a REST based web service. We will also be using other libraries like RxJava 2.1.10 (by ReactiveX) to handle tasks asynchronously, along with Jackson and the Jasminb JSON API converter, in an MVVM architecture.

We will make a POST request to the server, so we specify the declaration in SpeakersCallApi.java:

@POST("speakers-calls")
Observable<SpeakersCall> postSpeakersCall(@Body SpeakersCall speakersCall);

We will use the method createSpeakersCall(SpeakersCall speakersCall) in SpeakersCallRepositoryImpl.java to interact with the SpeakersCallApi and make the network request for us. The response will be passed on to the SpeakersCallViewModel method which calls it.

@Override
public Observable<SpeakersCall> createSpeakersCall(SpeakersCall speakersCall) {
    if (!repository.isConnected()) {
        return Observable.error(new Throwable(Constants.NO_NETWORK));
    }

    return speakersCallApi
        .postSpeakersCall(speakersCall)
        .doOnNext(created -> {
            created.setEvent(speakersCall.getEvent());
            repository
                .save(SpeakersCall.class, created)
                .subscribe();
        })
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread());
}

Next, we will see the code for CreateSpeakersCallViewModel, which is the MVVM ViewModel that handles the processing logic for CreateSpeakersCallFragment. We pass a SpeakersCallRepository instance in its constructor. Methods like getProgress, getError and getSuccess pass LiveData objects from the ViewModel to the Fragment, and the latter observes changes on these objects and shows them in the UI. The createSpeakersCall method calls speakersCallRepository to send a POST request to the server asynchronously using Retrofit and RxJava. Further, the initialize method is used to set the initial values of time and date on the SpeakersCall object.

public class CreateSpeakersCallViewModel extends ViewModel {

    public void createSpeakersCall(long eventId) {
        if (!verify())
            return;

        Event event = new Event();
        event.setId(eventId);
        speakersCallLive.getValue().setEvent(event);

        compositeDisposable.add(speakersCallRepository.createSpeakersCall(speakersCallLive.getValue())
            .doOnSubscribe(disposable -> progress.setValue(true))
            .doFinally(() -> progress.setValue(false))
            .subscribe(var -> success.setValue("Speakers Call Created Successfully"),
                throwable -> error.setValue(ErrorUtils.getMessage(throwable))));
    }
}

We will create a Fragment class to bind the UI elements to the model class so that they can be processed by the ViewModel. The creation form contains fewer fields, therefore BaseBottomSheetFragment is used. In the onStart method, we bind several LiveData objects like SpeakersCall, Progress, Error and Success to the UI and then observe changes in them. Using LiveData, we don’t have to write extra code to handle screen rotation or to update the UI when the bound object is modified.

public class CreateSpeakersCallFragment extends BaseBottomSheetFragment implements CreateSpeakersCallView {

    @Override
    public void onStart() {
        super.onStart();
        createSpeakersCallViewModel.getSpeakersCall().observe(this, this::showSpeakersCall);
        createSpeakersCallViewModel.getProgress().observe(this, this::showProgress);
        createSpeakersCallViewModel.getError().observe(this, this::showError);
        createSpeakersCallViewModel.getSuccess().observe(this, this::onSuccess);
        createSpeakersCallViewModel.initialize();
    }
}

We will design the speakers call create form using two-way data binding to bind the SpeakersCall object to the UI, and use TextInputEditText to take user input. Along with that, we have used time and date pickers to make it easier to select the date and time. For the complete code, please refer here.

Thus, we are able to build a responsive, reactive UI using Android Architectural Components.

References

  1. Official documentation of Retrofit 2.x http://square.github.io/retrofit/
  2. Official documentation for RxJava 2.x https://github.com/ReactiveX/RxJava
  3. Official documentation for ViewModel https://developer.android.com/topic/libraries/architecture/viewmodel
  4. Codebase for Open Event Orga App https://github.com/fossasia/open-event-orga-app

Handling Android Runtime permissions in UI Tests in SUSI.AI Android

With the introduction of runtime permissions in Marshmallow (API level 23), it was necessary in SUSI.AI to ensure that:

  • It was verified whether we had the permission that was needed, when required
  • The user was requested to grant the permission, when deemed appropriate
  • The result (empty states or data feedback) was correctly handled within the UI to represent the outcome of being granted or denied the required permission

You might have written UI tests. But what about instances where the app needs the user’s permission, like allowing the app to access the contacts on the device, for the tests to run? Would the tests pass when run on Android 6.0+ devices?
And can Espresso be used to achieve this? Unfortunately, Espresso doesn’t have the ability to access components outside of the application package. So, how do we handle this?
There are two approaches to handle this :
1) Using the UI Automator
2) Using the GrantPermissionRule

Let us have a look at both of these approaches in detail :

Using UI Automator to Handle Runtime Permissions on Android for UI Tests :

UI Automator is a UI testing framework suitable for cross-app functional UI testing across system and installed apps. This framework requires Android 4.3 (API level 18) or higher.
The UI Automator testing framework provides a set of APIs to build UI tests that perform interactions on user apps and system apps. The UI Automator APIs allow you to perform operations such as opening the Settings menu or the app launcher on a test device. This testing framework is well-suited for writing black box-style automated tests, where the test code does not rely on internal implementation details of the target app.

The key features of this testing framework include the following :

  • A viewer to inspect layout hierarchy. For more information, see UI Automator Viewer.
  • An API to retrieve state information and perform operations on the target device. For more information, see Accessing device state.
  • APIs that support cross-app UI testing. For more information, see UI Automator APIs.
  • Unlike Espresso, UIAutomator can interact with system applications which means that you’ll be able to interact with the permissions dialog, if needed.

    So, how to do this? Well, if you want to grant a permission in a UI test then you need to find the corresponding UiObject that you wish to click on. In our case, the permissions dialog box is the UiObject. This object is a representation of a view – it is not bound to the view but contains information to locate the matching view at runtime, based on the properties of the UiSelector instance within its constructor. A UiSelector instance is an object that declares elements to be targeted by the UI test within the layout. You can set various properties such as a text value, class name or content-description for this UiSelector instance.
    So, once you have your UiObject (the permissions dialog), you can determine which option you want to select and then use the click() method to grant or deny permission access.

fun allowPermissionsIfNeeded() {
    if (Build.VERSION.SDK_INT >= 23) {
        val allowPermissions: UiObject = mDevice
                .findObject(UiSelector().text("ALLOW"))
        if (allowPermissions.exists()) {
            try {
                allowPermissions.click()
            } catch (e: UiObjectNotFoundException) {
                Log.e(TAG, "There is no permission dialog to interact with", e)
            }
        }
    }
}

Similarly, you can handle the “DENY” option by targeting its text with the UiSelector.

So, this is how you can use UI Automator to handle Android runtime permissions for UI testing. Now, let us have a look at the other and a newer approach :

Using GrantPermissionRule to Handle Runtime Permissions on Android for UI Tests :

GrantPermissionRule is used to grant runtime permissions in advance, preventing the permission dialog from showing up and blocking the UI of the app. In this approach, permissions can only be requested on API level 23 (Android M) or higher. All you need to do is add the following rule to your UI test:

@Rule
public GrantPermissionRule mRuntimePermissionRule =
        GrantPermissionRule.grant(android.Manifest.permission.ACCESS_FINE_LOCATION);

ACCESS_FINE_LOCATION (in the above code) can be replaced by any other permission that your app requires.
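GrantPermissionRule.grant() accepts a vararg of permissions, so several permissions can be granted at once. A small sketch (the permissions below are chosen purely for illustration):

@Rule
public GrantPermissionRule mRuntimePermissionRule = GrantPermissionRule.grant(
        android.Manifest.permission.RECORD_AUDIO,
        android.Manifest.permission.WRITE_EXTERNAL_STORAGE);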

This will also be implemented in the SUSI.AI Android app for UI tests. Unit tests and UI tests form an integral part of good software, so you need to write quality tests for your projects to detect and fix bugs and flaws easily and conveniently.


Voltage Measurement through Channels in PSLab

The Pocket Science Lab multimeter has three channels, namely CH1, CH2 and CH3, with different ranges for measuring voltages. This blog gives a brief description of how we measure voltages on these channels.

Measuring voltages at the channels can be divided into three parts:

        1. Communication between the PSLab device and the Android app.
        2. Setting up the analog channels (analog constants).
        3. The voltage measuring functions in the Android app.

Communication between PSLab device and Android App

The communication between the PSLab device and the Android app happens with the help of the UsbManager package, used by the CommunicationHandler class of the app. The two main functions involved in the communication are the read and write functions, in which we send a particular number of bytes and then receive a certain number of bytes in return.

The read function :-

public int read(byte[] dest, int bytesToBeRead, int timeoutMillis) throws IOException {
    int numBytesRead = 0;
    //synchronized (mReadBufferLock) {
    int readNow;
    Log.v(TAG, "TO read : " + bytesToBeRead);
    int bytesToBeReadTemp = bytesToBeRead;
    while (numBytesRead < bytesToBeRead) {
        readNow = mConnection.bulkTransfer(mReadEndpoint, mReadBuffer, bytesToBeReadTemp, timeoutMillis);
        if (readNow < 0) {
            Log.e(TAG, "Read Error: " + bytesToBeReadTemp);
            return numBytesRead;
        } else {
            //Log.v(TAG, "Read something" + mReadBuffer);
            System.arraycopy(mReadBuffer, 0, dest, numBytesRead, readNow);
            numBytesRead += readNow;
            bytesToBeReadTemp -= readNow;
            //Log.v(TAG, "READ : " + numBytesRead);
            //Log.v(TAG, "REMAINING: " + bytesToBeRead);
        }
    }
    //}
    Log.v("Bytes Read", "" + numBytesRead);
    return numBytesRead;
}

Similarly the write function is –

public int write(byte[] src, int timeoutMillis) throws IOException {
    if (Build.VERSION.SDK_INT < 18) {
        return writeSupportAPI(src, timeoutMillis);
    }
    int written = 0;
    while (written < src.length) {
        int writeLength, amtWritten;
        //synchronized (mWriteBufferLock) {
        writeLength = Math.min(mWriteBuffer.length, src.length - written);
        // bulk transfer supports offset from API 18
        amtWritten = mConnection.bulkTransfer(mWriteEndpoint, src, written, writeLength, timeoutMillis);
        //}
        if (amtWritten < 0) {
            throw new IOException("Error writing " + writeLength +
                " bytes at offset " + written + " length=" + src.length);
        }
        written += amtWritten;
    }
    return written;
}

Although these are the core functions used for communication, the data received through them is further processed by another class known as PacketHandler. The PacketHandler class in turn has two major functions, sendByte() and getByte(), which are the ones used by the other classes for communication.

The sendByte function:-

public void sendByte(int val) throws IOException {
    if (!connected) {
        throw new IOException("Device not connected");
    }
    if (!loadBurst) {
        try {
            mCommunicationHandler.write(new byte[] {
                (byte)(val & 0xff), (byte)((val >> 8) & 0xff)
            }, timeout);
        } catch (IOException e) {
            Log.e("Error in sending int", e.toString());
            e.printStackTrace();
        }
    } else {
        burstBuffer.put(new byte[] {
            (byte)(val & 0xff), (byte)((val >> 8) & 0xff)
        });
    }
}

As we can see, this function also relies on the write function of CommunicationHandler, but here the data is processed further before being sent.

Setting Up the Analog Constants

For setting up the ranges, gains and other properties of the channels, a separate AnalogConstants class is implemented in the Android app. All the properties used by the channels are defined in this class and are later used together with the sendByte() function for communication.

public class AnalogConstants {

    public double[] gains = {1, 2, 4, 5, 8, 10, 16, 32, 1 / 11.};
    public String[] allAnalogChannels = {"CH1", "CH2", "CH3", "MIC", "CAP", "SEN", "AN8"};
    public String[] biPolars = {"CH1", "CH2", "CH3", "MIC"};
    public Map<String, double[]> inputRanges = new HashMap<>();
    public Map<String, Integer> picADCMultiplex = new HashMap<>();

    public AnalogConstants() {

        inputRanges.put("CH1", new double[]{16.5, -16.5});
        inputRanges.put("CH2", new double[]{16.5, -16.5});
        inputRanges.put("CH3", new double[]{-3.3, 3.3});
        inputRanges.put("MIC", new double[]{-3.3, 3.3});
        inputRanges.put("CAP", new double[]{0, 3.3});
        inputRanges.put("SEN", new double[]{0, 3.3});
        inputRanges.put("AN8", new double[]{0, 3.3});

        picADCMultiplex.put("CH1", 3);
        picADCMultiplex.put("CH2", 0);
        picADCMultiplex.put("CH3", 1);
        picADCMultiplex.put("MIC", 2);
        picADCMultiplex.put("AN4", 4);
        picADCMultiplex.put("SEN", 7);
        picADCMultiplex.put("CAP", 5);
        picADCMultiplex.put("AN8", 8);

    }
}

Also, in the AnalogInputSource class many other properties, such as CHOSA (a variable holding the ADC multiplexer index assigned to each channel, taken from picADCMultiplex), are defined:

public AnalogInputSource(String channelName) {
    AnalogConstants analogConstants = new AnalogConstants();
    this.channelName = channelName;
    range = analogConstants.inputRanges.get(channelName);
    gainValues = analogConstants.gains;
    this.CHOSA = analogConstants.picADCMultiplex.get(channelName);
    calPoly10 = new PolynomialFunction(new double[] {
        0.,
        3.3 / 1023,
        0.
    });
    calPoly12 = new PolynomialFunction(new double[] {
        0.,
        3.3 / 4095,
        0.
    });
    if (range[1] - range[0] < 0) {
        inverted = true;
        inversion = -1;
    }
    if (channelName.equals("CH1")) {
        gainEnabled = true;
        gainPGA = 1;
        gain = 0;
    } else if (channelName.equals("CH2")) {
        gainEnabled = true;
        gainPGA = 2;
        gain = 0;
    }
    gain = 0;
    regenerateCalibration();
}

In this constructor a polynomial function is also created, and it plays an important role in measuring voltage: it is through this polynomial that raw ADC counts are converted to channel voltages in the ScienceLab class, and it is also used by the oscilloscope for plotting the graph. This completes the setup of the analog channels.
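To make the role of the polynomial concrete, here is a minimal sketch of what the 12-bit calibration polynomial defined above does (PolynomialFunction comes from Apache Commons Math, which the code above already uses):

// calPoly12 maps a raw 12-bit ADC count (0 to 4095) linearly onto 0 to 3.3 V.
PolynomialFunction calPoly12 = new PolynomialFunction(new double[]{0., 3.3 / 4095, 0.});
double volts = calPoly12.value(2048);   // a mid-scale count corresponds to roughly 1.65 V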

Voltage Measuring Functions

There are two major functions for measuring voltages, both present in the ScienceLab class:

  • getAverageVoltage
  • getRawAverageVoltage

Here are the functions

private double getRawAverageVoltage(String channelName) {
    try {
        int chosa = this.calcCHOSA(channelName);
        mPacketHandler.sendByte(mCommandsProto.ADC);
        mPacketHandler.sendByte(mCommandsProto.GET_VOLTAGE_SUMMED);
        mPacketHandler.sendByte(chosa);
        int vSum = mPacketHandler.getVoltageSummation();
        mPacketHandler.getAcknowledgement();
        return vSum / 16.0;
    } catch (IOException | NullPointerException e) {
        e.printStackTrace();
        Log.e(TAG, "Error in getRawAverageVoltage");
    }
    return 0;
}

This is the major function that takes the data from the CommunicationHandler class via PacketHandler. It is then used by the getAverageVoltage function.

private double getAverageVoltage(String channelName, Integer sample) {
    if (sample == null) sample = 1;
    PolynomialFunction poly;
    double sum = 0;
    poly = analogInputSources.get(channelName).calPoly12;
    ArrayList < Double > vals = new ArrayList < > ();
    for (int i = 0; i < sample; i++) {
        vals.add(getRawAverageVoltage(channelName));
    }
    for (int j = 0; j < vals.size(); j++) {
        sum = sum + poly.value(vals.get(j));
    }
    return sum / vals.size();
}

This function takes the data from getRawAverageVoltage and applies the polynomial generated in the analog classes to calculate the final voltage. This is the core backend for calculating voltages through the channels in PSLab.
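For a sense of how this backend is consumed, a call from the UI layer could look roughly like the sketch below. Since getAverageVoltage() is private, a public wrapper is assumed here; the name getVoltage and the sample count are purely illustrative:

// Hypothetical call site: read the voltage on CH1, averaged over 10 samples.
double ch1Voltage = scienceLab.getVoltage("CH1", 10);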


Using Multimeter in PSLab Android Application

The Pocket Science Lab, as we all know, is under active development, with new features and UI changes added almost every day. One such feature is the implementation of the multimeter, which I have also discussed in my previous blogs.

Figure (1) : Screenshot of the multimeter

Although many functions of the multimeter, such as resistance measurement, are working perfectly, there are still various bugs in it. This blog is dedicated to using the multimeter in the Android app.

Using the multimeter

Figure (2): Screenshot showing guide of multimeter

Figure (2) shows the guide of the multimeter, i.e. how basic quantities such as resistance and voltage are measured using the multimeter. Demonstrations of measuring resistance and voltage are given below.

Measuring the resistance

The resistance is measured by connecting the SEN pin to one end of the resistor and the GND pin to the other end, and then clicking the RESISTANCE button.

                   Figure (3) : Demonstration of resistance measurement

Measuring the voltage

To measure the voltage, as explained in the guide, directly connect the power source to the channel pins. Currently only the CH3 pin shows the most accurate results; work is going on to improve the other channels as well.

Figure (4) : Demonstration of Voltage measurement

 

And this is how the multimeter is used in the Android app. Of course, there are still features, such as capacitance measurement, which are yet to be implemented, and work on them is in progress.


Adding chrome custom tabs support for native browsing in Eventyay Organizer App

In Eventyay Organizer App when a user taps a URL, we face a choice: either launch a browser, or build our own in-app browser using WebViews.

Both options present challenges — launching the browser is a heavy context switch that isn’t customizable, while WebViews don’t share state with the browser and add maintenance overhead.

Chrome Custom Tabs give apps more control over their web experience, and make transitions between native and web content more seamless without having to resort to a WebView.

The first step for a Custom Tabs integration is adding the Custom Tabs Support Library to your project. Open your build.gradle file and add the support library to the dependencies section. One must remember that Chrome Custom Tabs is not an open source library, so we include it with playStoreImplementation as opposed to implementation, so that our F-Droid build doesn’t fail.

Adding the dependency in build.gradle(app-level) in the project:

dependencies {
   //Other dependencies

   // Chrome Custom Tabs
   playStoreImplementation "com.android.support:customtabs:${versions.chromeCustomTabs}"
}

 

And add this to the versions.gradle file:

versions.chromeCustomTabs = '27.1.0'

The UI customizations are done by using the CustomTabsIntent and CustomTabsIntent.Builder classes; the performance improvements are achieved by using the CustomTabsClient to connect to the Custom Tabs service, warm up Chrome and let it know which URLs will be opened.
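The warm-up step is optional; a minimal sketch could look like the following (the provider package name is assumed to be Chrome here):

// Optional: bind to the Custom Tabs service and warm up the browser before any URL is opened.
CustomTabsClient.bindCustomTabsService(context, "com.android.chrome",
        new CustomTabsServiceConnection() {
            @Override
            public void onCustomTabsServiceConnected(ComponentName name, CustomTabsClient client) {
                client.warmup(0L);
            }

            @Override
            public void onServiceDisconnected(ComponentName name) {
                // nothing to clean up
            }
        });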

Since we need this feature to be available from every place in the app that redirects to an external link, we create it in a utility class of the project. Let’s name this class BrowserUtils and implement it as follows:

public final class BrowserUtils {

    private BrowserUtils() {
    }

    public static void launchUrl(Context context, String url) {
        CustomTabsIntent.Builder builder = new CustomTabsIntent.Builder();
        CustomTabsIntent customTabsIntent = builder.build();
        customTabsIntent.launchUrl(context, Uri.parse(url));
    }
}

Now all we need to do is replace the old method to create an intent:

findPreference(getString(R.string.privacy_policy_key)).setOnPreferenceClickListener(preference -> {
           Intent intent = new Intent(Intent.ACTION_VIEW);
           intent.setData(Uri.parse(PRIVACY_POLICY_URL));
           startActivity(intent);
           return true;
       });

with this call to the utility method:

findPreference(getString(R.string.privacy_policy_key)).setOnPreferenceClickListener(preference -> {
           BrowserUtils.launchUrl(getContext(), PRIVACY_POLICY_URL);
           return true;
       });

We can also customize the color of the toolbar, add action buttons, or set enter and exit animations like this:

builder.setToolbarColor(colorInt);
builder.setStartAnimations(this, R.anim.slide_in_right, R.anim.slide_out_left);

builder.setExitAnimations(this, R.anim.slide_in_left, R.anim.slide_out_right);

Here’s the result:


Using Transitions API with Email Validation in Open Event Organizer Android App

Transitions in Material Design apps provide visual continuity. As the user navigates the app, views in the app change state. Motion and transformation reinforce the idea that interfaces are tangible, connecting common elements from one view to the next.

In the Open Event Organizer Android App, we need a transition from the Get Started screen such that, if the user’s email is registered, we transition to the Login screen; otherwise, we transition to the Sign Up screen. The transition should be such that the email field remains continuously visible. One more condition is that, if the email field is changed by even one character, we need to transition back to the Get Started screen.

To implement this, we need to use Shared Elements from the Android Transitions API.

What are shared elements?

A shared element transition determines how views that are present in two fragments transition between them. For example, an image that is displayed on an ImageView on both Fragment A and Fragment B transitions from A to B when B becomes visible.

The Fade transition is used to fade a view in or out, and the ChangeBounds transition is used to move a view by animating changes in its layout bounds.

To set up the transitions, we use the setupTransitionAnimation() function like this:

(Note that we create our own Fade and ChangeBounds transitions in code rather than defining them in XML.)

   public void setupTransitionAnimation(Fragment fragment) {
       Fade exitFade = new Fade();
       exitFade.setDuration(100);
       this.setExitTransition(exitFade);
       fragment.setReturnTransition(exitFade);

       ChangeBounds changeBoundsTransition = new ChangeBounds();
       fragment.setSharedElementEnterTransition(changeBoundsTransition);

       Fade enterFade = new Fade();
       enterFade.setStartDelay(300);
       enterFade.setDuration(300);
       fragment.setEnterTransition(enterFade);
       this.setReenterTransition(enterFade);
   }
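With the transitions set up, the fragment transaction that navigates to the Login (or Sign Up) fragment declares the email field as the shared element. The sketch below is illustrative only; the fragment class, container id and transition name string are assumptions:

// Hypothetical transaction: the email input travels as a shared element into the next fragment.
LoginFragment loginFragment = new LoginFragment();
setupTransitionAnimation(loginFragment);

getFragmentManager().beginTransaction()
        .replace(R.id.fragment_container, loginFragment)
        .addSharedElement(binding.emailLayout, "email_shared_element")
        .addToBackStack(null)
        .commit();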

Now, in order to detect whether the email field is touched and changed by even one character, we use a TextWatcher like this:

       binding.emailLayout.getEditText().addTextChangedListener(new TextWatcher() {

           @Override
           public void beforeTextChanged(CharSequence s, int start, int count, int after) {
               //do nothing
           }

           @Override
           public void onTextChanged(CharSequence s, int start, int before, int count) {
               if (start != 0) {
                   sharedViewModel.setEmail(s.toString());
                   getFragmentManager().popBackStack();
               }
           }

           @Override
           public void afterTextChanged(Editable s) {
               //do nothing

           }

        });

So basically, the moment the text is changed, we add the changed email to the SharedViewModel so that it can be used in the other fragments, and then to start the transition, we pop the back stack using getFragmentManager().popBackStack();

This is what the result looks like:

Resources

  • Google – Android developer blog post:

https://android-developers.googleblog.com/2018/02/continuous-shared-element-transitions.html


Option to Rename an Image in Phimpme Android Application

In the Phimpme Android application, users can perform various operations on images, such as editing an image, sharing it, moving it to another folder, printing a PDF version of it, and many more. Another important functionality that has been implemented is the option to rename an image. So in this blog post, I will be discussing how we achieved the functionality to rename an image.

Step 1

First, we need to add an option to the overflow menu to rename the image being viewed. The option to rename an image has been added by implementing the following lines of code in the menu_view_pager.xml file.

<item
  android:id="@+id/rename_photo"
  app:showAsAction="never"
  android:title="@string/Rename"/>
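When the user selects this item, the click is handled in the activity’s onOptionsItemSelected callback. A minimal sketch follows; the showRenameDialog() helper name is an assumption:

@Override
public boolean onOptionsItemSelected(MenuItem item) {
    if (item.getItemId() == R.id.rename_photo) {
        // Build and show the rename AlertDialog described in the next step.
        showRenameDialog();
        return true;
    }
    return super.onOptionsItemSelected(item);
}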

Step 2

Now, after the user chooses the option to rename an image from the overflow menu, an alert dialog with an EditText is displayed, showing the old name and letting the user enter a new name for the photo. The dialog box with the EditText has been implemented with the following lines of XML code.

<android.support.v7.widget.CardView
   android:layout_width="match_parent"
   android:layout_height="wrap_content"
   app:cardCornerRadius="2dp"
   android:id="@+id/description_dialog_card">
   <ScrollView
       android:layout_width="match_parent"
       android:layout_height="match_parent">
       <LinearLayout
           android:layout_width="match_parent"
           android:layout_height="wrap_content"
           android:orientation="vertical">
           <TextView
               android:id="@+id/description_dialog_title"
               android:layout_width="match_parent"
               android:textColor="@color/md_dark_primary_text"
               android:layout_height="wrap_content"
               android:background="@color/md_dark_appbar"
               android:padding="24dp"
               android:text="@string/type_description"
               android:textSize="18sp"
               android:textStyle="bold" />
           <LinearLayout
               android:id="@+id/rl_description_dialog"
               android:layout_width="match_parent"
               android:layout_height="wrap_content"
               android:orientation="horizontal"
               android:padding="15dp">
               <EditText
                   android:id="@+id/description_edittxt"
                   android:layout_width="fill_parent"
                   android:layout_height="wrap_content"
                   android:padding="15dp"
                   android:hint="@string/description_hint"
                   android:textColorHint="@color/grey"
                   android:layout_margin="20dp"
                   android:gravity="top|left"
                   android:inputType="textCapSentences|textMultiLine"
                   android:maxLength="2000"
                   android:maxLines="4"
                   android:selectAllOnFocus="true"/>
               <ImageButton
                   android:layout_width="@dimen/mic_image"
                   android:layout_height="@dimen/mic_image"
                   android:layout_alignRight="@+id/description_edittxt"
                   app2:srcCompat="@drawable/ic_mic_black"
                   android:layout_gravity="center"
                   android:background="@color/transparent"
                   android:paddingEnd="10dp"
                   android:paddingTop="12dp"
                   android:id="@+id/voice_input"/>
           </LinearLayout>
       </LinearLayout>
   </ScrollView>
</android.support.v7.widget.CardView>

The screenshot displaying the dialog with edittext is provided below.

Step 3

Now, after retrieving the new name entered by the user for the photo, we extract the extension of the old name, which can be .jpg, .png etc. Thereafter, we create a new File object, passing in the new path of the image as the constructor parameter (the folder stays the same; only the image name changes). Using the renameTo method of the File class, the old image file can then be renamed. However, the rename operation does not automatically update the image reference in the MediaStore database. So we need to delete the old file path from the Android system MediaStore database, which keeps a URI reference to all the media files present on the device. Finally, we need to invoke the media scanner so that the new file path of the renamed image is scanned and picked up by the MediaStore database; this can be done with an action intent that initiates the media scan. The code changes implemented to perform the above-mentioned operations are given below.

public void onClick(View dialog) {
      if (editTextNewName.length() != 0) {
          int index = file.getPath().lastIndexOf("/");
          String path = file.getPath().substring(0, index);
          File newname = new File(path + "/" + editTextNewName.getText().toString() + "." +
                  imageextension);
          if(file.renameTo(newname)){
              ContentResolver resolver = getApplicationContext().getContentResolver();
              resolver.delete(
                      MediaStore.Images.Media.EXTERNAL_CONTENT_URI, MediaStore.Images.Media.DATA +
                              "=?", new String[] { file.getAbsolutePath() });
              Intent intent = new Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE);
              intent.setData(Uri.fromFile(newname));
              getApplicationContext().sendBroadcast(intent);
          }
          if(!allPhotoMode){
              int a = getAlbum().getCurrentMediaIndex();
              getAlbum().getMedia(a).setPath(newname.getPath());
          }else {
              listAll.get(current_image_pos).setPath(newname.getPath());
          }
          renameDialog.dismiss();
          SnackBarHandler.showWithBottomMargin(parentView, getString(R.string.rename_succes), navigationView
                  .getHeight());
      } else {
          SnackBarHandler.showWithBottomMargin(parentView, getString(R.string.insert_something),
                  navigationView.getHeight());
          editTextNewName.requestFocus();
      }
  }
});

This is how we have implemented the functionality to rename an image in the Phimpme Android application. To get the full source code, please refer to the Phimpme Android Github repository listed in the resource section below.

Resources

1. Android Developer documentation – https://developer.android.com/reference/java/io/File

2. GitHub – Phimpme Android Repository – https://github.com/fossasia/phimpme-android/

3. Renaming a file in Java – http://stacktips.com/tutorials/java/how-to-delete-and-rename-a-file-in-java

 
