Sensor Data Logging in the PSLab Android App

The PSLab Android App allows users to log data from sensors connected to the PSLab hardware device. The connected sensors should support the I2C or SPI communication protocol to communicate with the PSLab device successfully. The only other prerequisite is support for the particular sensor plugin in the Android App. The user can log data from various sensors and measure parameters like temperature, humidity, acceleration, magnetic field, etc. These parameters are useful in predicting and monitoring the environment and in performing many experiments.

Support for the sensor plugins was added while porting the Python communication library code to Java. In this post, we will discuss how we logged real-time sensor data from the PSLab hardware device. We used the Realm database to store the sensor data locally, and we take the MPU6050 sensor as an example to walk through the complete process of logging sensor data.

Creating Realm Object for MPU6050 Sensor Data

The MPU6050 sensor gives acceleration and gyroscope readings along the three axes X, Y and Z. So the data object storing the readings of the MPU6050 sensor has variables for the acceleration and gyroscope readings along all three axes, plus the temperature.

public class DataMPU6050 extends RealmObject {

   private double ax, ay, az;
   private double gx, gy, gz;
   private double temperature;

   public DataMPU6050() {  }

   public DataMPU6050(double ax, double ay, double az, double gx, double gy, double gz, double temperature) {
       this.ax = ax;
       this.ay = ay;
       this.az = az;
       this.gx = gx;
       this.gy = gy;
       this.gz = gz;
       this.temperature = temperature;
   }

  // getter and setter for all variables
}

Creating Runnable to Start/Stop Data Logging

To sample the sensor data at a 500 ms interval, we created a Runnable object and executed it on another thread, which prevents lagging of the UI thread. We can start/stop logging by changing the value of the boolean loggingThreadRunning on a button click. TaskMPU6050 is an AsyncTask which reads one sample of sensor data from the PSLab device; it gets executed inside a while loop controlled by loggingThreadRunning. Thread.sleep(500) pauses the thread for 500 ms, which is another reason to move the logging to a separate thread instead of logging the sensor data on the UI thread: if such 500 ms delays were incorporated on the UI thread, the app experience would not be smooth for the users.

Runnable loggingRunnable = new Runnable() {
   @Override
   public void run() {
       try {
           MPU6050 sensorMPU6050 = new MPU6050(i2c);
           while (loggingThreadRunning) {
               TaskMPU6050 taskMPU6050 = new TaskMPU6050(sensorMPU6050);
               taskMPU6050.execute();
              // use lock object to synchronize threads
               Thread.sleep(500);
           }
        } catch (IOException | InterruptedException e) {
           e.printStackTrace();
       }
   }
};
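
The comment inside the loop refers to a shared lock object that keeps the loop from requesting a new sample before the previous TaskMPU6050 has finished. A minimal sketch of how this synchronization could look (assuming a shared final Object named lock, the same object notified in onPostExecute below) is:

private final Object lock = new Object();

// inside the try block of loggingRunnable shown above
while (loggingThreadRunning) {
    TaskMPU6050 taskMPU6050 = new TaskMPU6050(sensorMPU6050);
    synchronized (lock) {
        taskMPU6050.execute();
        lock.wait();        // released when onPostExecute() calls lock.notify()
    }
    Thread.sleep(500);      // wait 500 ms before requesting the next sample
}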

Sampling of Sensor Data

We created an AsyncTask to read each sample of the sensor data from the PSLab device on a background thread. The getRaw() method reads raw values from the sensor and returns an ArrayList containing the acceleration and gyroscope values. After the values are read successfully, they are shown on the data card in the foreground, which acts as a real-time screen for the user. All the samples read are appended to the ArrayList mpu6050DataList; when the user clicks the Save Data button, the collected samples are saved to the local Realm database.

private ArrayList<DataMPU6050> mpu6050DataList = new ArrayList<>();

private class TaskMPU6050 extends AsyncTask<Void, Void, Void> {

   private MPU6050 sensorMPU6050;
   private ArrayList<Double> dataMPU6050 = new ArrayList<>();

   TaskMPU6050(MPU6050 mpu6050) {
       this.sensorMPU6050 = mpu6050;
   }

   @Override
   protected Void doInBackground(Void... params) {
       try {
           dataMPU6050 = sensorMPU6050.getRaw();
       } catch (IOException e) {
           e.printStackTrace();
       }
       return null;
   }

   @Override
   protected void onPostExecute(Void aVoid) {
       super.onPostExecute(aVoid);
       // update data card TextViews with data read.
       DataMPU6050 tempObject = new DataMPU6050(dataMPU6050.get(0), dataMPU6050.get(1), dataMPU6050.get(2),
               dataMPU6050.get(4), dataMPU6050.get(5), dataMPU6050.get(6), dataMPU6050.get(3));
       mpu6050DataList.add(tempObject);
       synchronized (lock) {
           lock.notify();
       }
   }
}
Source: PSLab Android App

There is a Start/Stop Logging option; clicking it changes the value of the boolean loggingThreadRunning, which starts/stops the logging thread.
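
A minimal sketch of such a toggle (the button name buttonStartStop is an assumption; the runnable is the loggingRunnable shown above) could be:

buttonStartStop.setOnClickListener(v -> {
    if (!loggingThreadRunning) {
        loggingThreadRunning = true;                 // start logging
        new Thread(loggingRunnable).start();         // run the sampling loop off the UI thread
        buttonStartStop.setText("Stop Logging");
    } else {
        loggingThreadRunning = false;                // the while loop exits after the current sample
        buttonStartStop.setText("Start Logging");
    }
});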

When the Save Data button is clicked, all the samples of sensor data collected from the PSLab device till that point are saved to the local Realm database.

realm.beginTransaction();
for (DataMPU6050 tempObject : mpu6050DataList) {
   realm.copyToRealm(tempObject);
}
realm.commitTransaction();

Data can also be written asynchronously to the local Realm database. For other ways to write to a Realm database, refer to the Write section of the Realm docs.
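
For instance, the same samples could be written on a background thread with Realm’s asynchronous transaction API; a rough sketch (the TAG constant is only for illustration) would be:

realm.executeTransactionAsync(
        bgRealm -> bgRealm.copyToRealm(mpu6050DataList),          // runs on a background thread
        () -> Log.d(TAG, "Sensor data saved"),                    // onSuccess, called on the UI thread
        error -> Log.e(TAG, "Saving sensor data failed", error)); // onError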

Resources


Adding Sticky Headers for Grouping Sponsors List in Open Event Android App

The Open Event Android project has a fragment for showing the sponsors of the event. Each Sponsor model has a name, url, type and level. The SponsorsFragment shows the list according to type and level. Each sponsor list item has a sponsor type TextView, and there can be more than one sponsor with the same type. So instead of showing the type in every Sponsor item, we can add a sticky header showing the type at the top, which groups the sponsors with the same type and also gives a better UI. In this post I explain how to add sticky headers to a RecyclerView using the StickyHeadersRecyclerView library.

1. Add dependency

In order to use sticky headers in your app, add the following dependency in your app module’s build.gradle file.

dependencies {
	compile 'com.timehop.stickyheadersrecyclerview:library:0.4.3'
}

2. Create layout for header

Create the recycler_view_header.xml file for the header. It contains a LinearLayout and a simple TextView which shows the sponsor type.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:orientation="vertical">

    <TextView
        android:id="@+id/recyclerview_view_header"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:padding="@dimen/padding_medium" />

</LinearLayout>

Here you can modify layout according to your need.

3.  Implement StickyRecyclerHeadersAdapter

Now implement StickyRecyclerHeadersAdapter in the list adapter. Override the getHeaderId(), onCreateHeaderViewHolder() and onBindHeaderViewHolder() methods of the StickyRecyclerHeadersAdapter.

public class SponsorsListAdapter extends BaseRVAdapter<Sponsor, SponsorViewHolder> implements StickyRecyclerHeadersAdapter {
    ...

    @Override
    public long getHeaderId(int position) {...}

    @Override
    public RecyclerView.ViewHolder onCreateHeaderViewHolder(ViewGroup parent) {...}

    @Override
    public void onBindHeaderViewHolder(RecyclerView.ViewHolder holder, int position) {...}
}

 

The getHeaderId() method is used to give an id to the header. It is the main part of the implementation: all the sponsors with the same type should return the same id. In our case we return the sponsor level, because all the sponsor types have corresponding levels.

String level = getItem(position).getLevel();
return Long.valueOf(level);

 

The onCreateHeaderViewHolder() method returns a RecyclerView.ViewHolder for the header. Here we use the inflate() method of LayoutInflater to get the View object of the recycler_view_header.xml file, and then return a new RecyclerView.ViewHolder built from that View.

View view = LayoutInflater.from(parent.getContext())
                .inflate(R.layout.recycler_view_header, parent, false);
return new RecyclerView.ViewHolder(view) {};

 

The onBindHeaderViewHolder() method binds the sponsor to the header ViewHolder. In this method we set the sponsor type string on the TextView we created in the recycler_view_header.xml file.

TextView textView = (TextView) holder.itemView.findViewById(R.id.recyclerview_view_header);
textView.setGravity(Gravity.CENTER_HORIZONTAL);

String sponsorType = getItem(position).getType();
if (!Utils.isEmpty(sponsorType))  
   textView.setText(sponsorType.toUpperCase());

Here you can also modify the TextView according to your needs. We are centering the text using the setGravity() method.

4.  Setup RecyclerView

Now create the RecyclerView and set the adapter using the setAdapter() method. As we want a linear list of sponsors, set a LinearLayoutManager using the setLayoutManager() method.

SponsorsListAdapter sponsorsListAdapter = new SponsorsListAdapter(getContext(), sponsors);
sponsorsRecyclerView.setAdapter(sponsorsListAdapter);
sponsorsRecyclerView.setLayoutManager(new LinearLayoutManager(getActivity()));

 

Create a StickyRecyclerHeadersDecoration object and add it to the RecyclerView using the addItemDecoration() method.

final StickyRecyclerHeadersDecoration headersDecoration = new StickyRecyclerHeadersDecoration(sponsorsListAdapter);

sponsorsRecyclerView.addItemDecoration(headersDecoration);
sponsorsListAdapter.registerAdapterDataObserver(new RecyclerView.AdapterDataObserver(){
    @Override
    public void onChanged() {
        headersDecoration.invalidateHeaders();
    }
});

Now add an AdapterDataObserver using the registerAdapterDataObserver() method. The onChanged() method in this observer is called whenever the dataset changes, so in this method we invalidate the headers using the invalidateHeaders() method of the StickyRecyclerHeadersDecoration.

Now we are all set. Run the app and it will look like this.

Conclusion

Sticky headers in the app give a great UI and UX. You can also add a click listener to the headers, as shown in the sketch below. To know more about sticky headers, follow the links given below.
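
For example, the same library provides a touch listener for header clicks; a sketch using the sponsorsRecyclerView and headersDecoration created above might look like this:

StickyRecyclerHeadersTouchListener touchListener =
        new StickyRecyclerHeadersTouchListener(sponsorsRecyclerView, headersDecoration);
touchListener.setOnHeaderClickListener(new StickyRecyclerHeadersTouchListener.OnHeaderClickListener() {
    @Override
    public void onHeaderClick(View header, int position, long headerId) {
        // e.g. show which sponsor group was tapped
        Toast.makeText(header.getContext(), "Header id: " + headerId, Toast.LENGTH_SHORT).show();
    }
});
sponsorsRecyclerView.addOnItemTouchListener(touchListener);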


Preparing for Automatic Publishing of Android Apps in Play Store

I spent this week searching through libraries and services which provide a way to publish built APKs directly through an API, so that the repositories for Android apps can trigger publishing automatically after each push on the master branch. The projects to be auto-deployed are:

I had my eyes on fastlane for a couple of months, and it turned out to be the best solution for the task. The tool not only allows publishing of APK files, but also of Play Store listings, screenshots and changelogs. And that is only a subset of its capabilities, bundled in the subservice supply.

There is a setup process before you can start to use this service, which I will go through step by step in this blog. The process is also outlined in the README of the supply project.

Enabling API Access

The first step in the process is to enable API access in your Play Store Developer account if you haven’t done so. For that, you have to open the Play Dev Console and go to Settings > Developer Account > API access.

If this is the first time you are opening it, you’ll be presented with a confirmation dialog detailing the ramifications of the action and asking if you agree to do so. Read the terms carefully and click accept if you agree with them. Once you do, you’ll be presented with a settings panel like this:

Creating Service Account

As you can see, there is no registered service account here, so we need to create one. Click on the CREATE SERVICE ACCOUNT button and this dialog will pop up, giving you instructions on how to do so:

So, open the highlighted link in a new tab and the Google API Console will open up, which will look something like this:

Click on Create Service Account and fill in these details:

Account Name: Any name you want

Role: Project > Service Account Actor

And then, select Furnish a new private key and select JSON. Click CREATE.

A new JSON key will be created and downloaded to your device. Keep this file secret, as anyone with access to it can at least change the Play Store listings of your apps, even if they cannot upload new apps in place of existing ones (those are protected by signing keys).

Granting Access

Now return to the Play Console tab (we were there in Figure 2 at the start of Creating Service Account) and click Done, as you have now created the service account. You should see the created service account listed like this:

Now click on grant access, choose Release Manager from Role dropdown, and select these PERMISSIONS:

Of course you don’t want the fastlane API to access financial data or manage orders. Other than that, it is up to you what to allow or disallow. The same goes for the expiry date; we have left it to never expire. Click on ADD USER and you’ll see the Release Manager created in the user list like below:

Now you are ready to use the fastlane service, or any other release management service for that matter.

Using fastlane

Install fastlane by

sudo gem install fastlane

Go to your project folder and run

fastlane supply init

First it will ask for the location of the private key JSON file you downloaded, and then for the package name of the application you are initializing fastlane for.

Then it will create a metadata folder with the listing information, excluding the images. So you’ll have to download and place the images manually the first time.

After modifying the listing, images or APK, run the command:

fastlane supply run

That’s it. Your app along with the store listing has been updated!

This is a very brief introduction to the capabilities of the supply service. All interactive options can be supplied via command line arguments, certain parts of the metadata can be omitted, and alpha/beta management along with release rollout can be done in steps! Make sure to check out the links below:


Using Firebase Test Lab for Testing test cases of Phimpme Android

We have now started writing some test cases for Phimpme Android. While running my instrumentation test case, I saw a Cloud Testing tab in Android Studio, which is for Firebase Test Lab. Firebase Test Lab provides cloud-based infrastructure for testing Android apps. Nobody has every device on every Android version, but testing on all of them is equally important.

How I used test lab in Phimpme

  • Run your first test on Firebase

Select Test Lab in your project in the left nav of the Firebase console, and then click Run a Robo test. The Robo test automatically explores your app on a wide array of devices to find defects and report any crashes that occur. It doesn’t require you to write test cases; all you need is the app’s APK. Nothing else is needed to use a Robo test.

Upload your application’s APK (app-debug-unaligned.apk) in the next screen and click Continue.

Configure the device selection; a wide range of devices and all API levels are available there. You can save the template for future use.

Click on Start test to start testing. It will run the tests and show real-time progress as well.

  • Using Firebase Test Lab from Android Studio

It requires Android Studio 2.0+. You need to edit the configuration of the Android instrumentation test.

Select Firebase Test Lab Device Matrix under Target. You can configure the matrix, which specifies the virtual and physical devices on which you want to run your tests. See the screenshot below for details.

Note: You need to enable Firebase in your project.

So, using Test Lab on Firebase, we can easily run the test cases on multiple devices and make our app more robust.

Resources:


Image Loading in Open Event Organizer Android App using Glide

Open Event Organizer is an Android App for the Event Organizers and Entry Managers. Open Event API Server acts as a backend for this App. The core feature of the App is to scan a QR code from the ticket to validate an attendee’s check in. Other features of the App are to display an overview of sales and ticket management. As per the functionality, the performance of the App is very important. The App should be functional even on a weak network. Talking about the performance, the image loading part in the app should be handled efficiently as it is not an essential part of the functionality of the App. Open Event Organizer uses Glide, a fast and efficient image loading library created by Sam Judd. I will be talking about its implementation in the App in this blog.

The first part is the configuration of Glide in the app. The library provides a very easy way to do that. Your app needs to implement a class extending AppGlideModule using annotations provided by the library, and it generates a Glide API which can be used in the app for all the image loading work. The AppGlideModule implementation in the Orga App looks like:

@GlideModule
public final class GlideAPI extends AppGlideModule {

   @Override
   public void registerComponents(Context context, Glide glide, Registry registry) {
       registry.replace(GlideUrl.class, InputStream.class, new OkHttpUrlLoader.Factory());
   }

   // TODO: Modify the options here according to the need
   @Override
   public void applyOptions(Context context, GlideBuilder builder) {
       int diskCacheSizeBytes = 1024 * 1024 * 10; // 10mb
       builder.setDiskCache(new InternalCacheDiskCacheFactory(context, diskCacheSizeBytes));
   }

   @Override
   public boolean isManifestParsingEnabled() {
       return false;
   }
}

 

This generates an API named GlideApp by default in the same package, which can be used in the whole app. Just make sure to add the annotation @GlideModule to this implementation, which is used to find this class in the app. The second part is using the generated GlideApp API in the app to load images from URLs. The Orga App uses data binding for layouts, so all the image loading related code is placed in a single place, the DataBinding class used by the layouts. The class has a method named setGlideImage which takes an image view, an image URL, a placeholder drawable and a transformation. The relevant code is:

private static void setGlideImage(ImageView imageView, String url, Drawable drawable, Transformation<Bitmap> transformation) {
       if (TextUtils.isEmpty(url)) {
           if (drawable != null)
               imageView.setImageDrawable(drawable);
           return;
       }
       GlideRequest<Drawable> request = GlideApp
           .with(imageView.getContext())
           .load(Uri.parse(url));

       if (drawable != null) {
           request
               .placeholder(drawable)
               .error(drawable);
       }
       request
           .centerCrop()
           .transition(withCrossFade())
           .transform(transformation == null ? new CenterCrop() : transformation)
           .into(imageView);
   }

 

The method is straightforward. First, the URL is checked for emptiness; if it is empty, the placeholder drawable is set on the ImageView and the method returns. The usage of GlideApp is simple: pass the context to GlideApp using the with() method, which returns a request builder on which load() and operators for the other required options like transitions, transformations and placeholder are chained. Lastly, pass the ImageView using the into() operator. By default, Glide uses the HttpURLConnection provided by Android to load the image; this can be changed to use OkHttp using the extension provided by the library, which is set in the registerComponents method of the AppGlideModule implementation.
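
Since the layouts use data binding, a small binding adapter can expose this helper to XML. This is a hypothetical wrapper (the attribute name imageUrl and the method name are assumptions, not the app’s actual code):

@BindingAdapter("imageUrl")
public static void bindImageUrl(ImageView imageView, String url) {
    // delegate to the setGlideImage helper shown above; no placeholder, default CenterCrop transformation
    setGlideImage(imageView, url, null, null);
}

In a layout, an ImageView could then use something like app:imageUrl="@{event.logoUrl}", where logoUrl is whatever URL field the bound model exposes.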

Links:
1. Documentation for Glide, an Image Loading Library
2. Documentation for Okhttp, an HTTP client for Android and Java Applications


Fascinating Experiments with PSLab

PSLab can be extensively used in a variety of experiments, ranging from traditional electrical and electronics experiments to a number of innovative experiments. The PSLab desktop app and the Android app have all the essential features that are needed to perform the experiments. In addition to that, there is a large collection of built-in experiments in both these apps.

This blog is an extension of the blog post mentioned here. It lists some of the basic electrical and electronics experiments which are based on the same principles as those mentioned in the previous blog. In addition, some interesting and innovative experiments where PSLab can be used are also listed here. The experiments mentioned here require some prerequisite knowledge of electronic elements and basic circuit building (the links at the end of the blog will be helpful in this case).

Op-Amp as an Inverting and a Non-Inverting Amplifier

There are two methods of doing this experiment. PSLab already has a built-in experiment dedicated to inverting and non-inverting amplification of op-amps. In the Android App, just navigate to Saved Experiments -> Electronics Experiments -> Op-Amp Circuits -> Inverting/ Non-Inverting. In case of the Desktop app, select Electronics Experiments from the main drop-down at the top of the window and select the Inverting/Non-inverting op-amp experiment.

This experiment can also be performed using the basic features of PSLab. The advantage of this method is that it allows much more tweaking of values to observe the op-amp behaviour in greater detail. However, the built-in experiment is good enough for most cases.

  • Construct the above circuits on a breadboard.
  • For the amplifier, connect the terminals of CH1 and GND of PSLab on the input side i.e. next to Vi and the terminals of CH2 and GND on the output side i.e next to Vo.
  • Usually, an op-amp like the LM741 has one pin dedicated to the inverting input and another dedicated to the non-inverting input. It is recommended to consult the datasheet of the op-amp IC used in order to get the pin number to which the input has to be connected.
  • The terminals of W1 and GND are also connected on the input side and they are used to generate a sine wave.
  • The resistors displayed in the figure have the values R1 = 10k and R2 = 51k. Resistance values other than these can also be used. The gain of the op-amp depends on the ratio R2/R1, so it is better to choose values of R2 which are significantly larger than R1 in order to see the gain properly (the ideal gain formulas are given after this list).
  • Use the PSLab Desktop App and open the Waveform Generator in Control. Set the wave type of W1 to Sine and set the frequency at 1 kHz and magnitude to 0.1 V. Then go ahead and open the Oscilloscope.
  • CH1 would display the input waveform and CH2 will display the output waveform and the plots can be observed.
  • If the input is connected to the inverting pin of the op-amp, the output obtained will be amplified and will have a phase difference of 180° with the input waveform, whereas when the non-inverting pin is used, the output is just amplified and no such phase difference is observed.
  • Note: Take proper care while connecting the V+ and V- pins of the op-amp, else the op-amp will be damaged.
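
For reference, the ideal closed-loop gain of the inverting configuration is Vo/Vi = -R2/R1, while that of the non-inverting configuration is Vo/Vi = 1 + R2/R1. With R1 = 10k and R2 = 51k this gives gains of about -5.1 and 6.1 respectively, which is what should be seen on the oscilloscope.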

Op-Amp as an Integrator and Differentiator

An integrator in measurement and control applications is an element whose output signal is the time integral of its input signal. It accumulates the input quantity over a defined time to produce a representative output.

Integration is an important part of many engineering and scientific applications. Mechanical integrators are the oldest application, and are still used in applications such as metering of water flow or electric power. Electronic analogue integrators are the basis of analog computers and charge amplifiers. Integration is also performed by digital computing algorithms.

In electronics, a differentiator is a circuit that is designed such that the output of the circuit is approximately directly proportional to the rate of change (the time derivative) of the input. An active differentiator includes some form of amplifier. A passive differentiator circuit is made of only resistors and capacitors.

  • Construct the above circuits on a breadboard.
  • For both the circuits, connect the terminals of CH1 and GND of PSLab on the input side i.e. next to input voltage source and the terminals of CH2 and GND on the output side i.e next to Vo.
  • Ensure that the inverting and the non-inverting terminals of the op-amp are connected correctly. Check for the +/- signs in the diagram. ‘+’ corresponds to non-inverting and ‘-’ corresponds to inverting.
  • The terminals of W1 and GND are also connected on the input side and they are used to generate a sine wave.
  • The resistors displayed in the figure have the values R1 = 10k and R2 = 51k. Resistance values other than these can also be considered. The gain of the op-amp would depend on the ratio of R2/R1, so it is better to consider values of R2 which are significantly larger than R1 in order to see the gain properly.
  • Use the PSLab Desktop App and open the Waveform Generator in Control. Set the wave type of W1 to Sine and set the frequency at 1 kHz and magnitude to 5V (10V peak to peak). Then go ahead and open the Oscilloscope.
  • CH1 would display the input waveform and CH2 will display the output waveform and the plots can be observed.
  • If all the connections are made properly and the values of the parameters are set properly, then the waveform obtained should be as shown below.

Performing experiments involving ICs (Digital circuits)

The experiments mentioned so far, including the ones in the previous blog post, involve analog circuits and so require features like the arbitrary waveform generator. However, digital circuits work using discrete values only. PSLab has the features needed to perform digital experiments, which mainly involve the use of a square wave generator with a variable duty cycle.

The PSLab board has dedicated pins named SQR1, SQR2, SQR3 and SQR4. The options for configuring these pins are present under the Advanced Control section in the Desktop app and under Applications -> Control -> Advanced in the Android app. The options include selecting the pins which we want to use for digital outputs and then configuring the frequency and duty cycle of the square wave generated from that particular pin.

Innovative Experiments using PSLab

PSLab has quite a good number of interesting built-in experiments. These experiments can be found in the dropdown list at the top in the Desktop App and under the Saved Experiments header in the Android App. The built-in experiments come bundled with good quality documentation containing circuit diagrams and detailed procedures for performing the experiments.

Some of the interesting experiments include:

  • Lemon Cell Experiment: In this experiment, the internal resistance and the voltage supplied by the lemon cell are measured.

 

  • Sound Beats: In acoustics, a beat is an interference pattern between two sounds of slightly different frequencies, perceived as a periodic variation in volume whose rate is the difference of the two frequencies.

 

  • When tuning instruments that can produce sustained tones, beats can readily be recognized. Tuning two tones to a unison will present a peculiar effect: when the two tones are close in pitch but not identical, the difference in frequency generates the beating.
  • This experiment requires producing two waves together of different frequencies and connecting them to the same oscilloscope channel. The pattern observed is shown below.

References:

  1. The previous blog on experiments using PSLab – https://blog.fossasia.org/electronics-experiments-with-pslab/
  2. More about op-amps and their characteristics – http://www.electronics-tutorials.ws/opamp/opamp_1.html
  3. Read more about differential and integrator circuits – https://www.allaboutcircuits.com/textbook/semiconductors/chpt-8/differentiator-integrator-circuits/
  4. Experiments involving digital circuits for reference – http://web.iitd.ac.in/~shouri/eep201/experiments.php

Adding Static Code Analyzers in Open Event Orga Android App

This week, in the Open Event Orga App project (Github Repo), we wanted to add some static code analysers that run on each build to ensure that the app code is free of potential bugs and follows a certain style. Codacy handles a few of these things, but it is quirky and sometimes produces false positives. Furthermore, it is not a required check for builds, so errors can creep in gradually. We chose Checkstyle, PMD and FindBugs for static analysis as they are the most popular tools for Java. The areas they cover overlap somewhat, but together they give better assurance of code quality. FindBugs actually analyses the bytecode instead of the source code to find possible JVM bugs.

Adding dependencies

The first step was to add the required dependencies. We chose the library android-check as it contains all 3 tools, is focused on Android and is easily configurable. First, we add the classpath in the project level build.gradle:

dependencies {
   classpath 'com.noveogroup.android:check:1.2.4'
}

 

Then, we apply the plugin in app level build.gradle

apply plugin: 'com.noveogroup.android.check'

 

This much is enough to get you started, but by default, the build will not fail if any violations are found. To change this behaviour, we add this block in app level build.gradle

check {
   abortOnError true
}

 

There are many configuration options available for the library. Do check out the project github repo using the link provided above.

Configuration

The default configuration is the easy level, which will be enough for most projects, but it is of course configurable. So we took the default hard configs for the 3 analysers and disabled the properties which we did not need. The config files need to be stored in the config folder in either the root project directory or the app directory, and they should be named checkstyle.xml, pmd.xml and findbugs.xml.

These are the default settings and you can of course configure them by following the instructions on the project repo.

Checkstyle

For checkstyle, you can find the easy and hard configuration here

The basic principle is that if you need to add a check, you include a module like this:

<module name="NewlineAtEndOfFile" />

 

If you want to modify the default value of some property, you do it like this:

<module name="RegexpSingleline">
   <property name="format" value="\s+$" />
   <property name="minimum" value="0" />
   <property name="maximum" value="0" />
   <property name="message" value="Line has trailing spaces." />
   <property name="severity" value="info" />
</module>

 

And if you want to remove a check, you can ignore it like this:

<module name="EqualsHashCode">
   <property name="severity" value="ignore" />
</module>

 

It’s pretty straightforward and easy to configure.

Findbugs

For findbugs, you can find the easy and hard configuration here

Findbugs configuration exists in the form of filters where we list resources it should skip analyzing, like:

<Match>
   <Class name="~.*\.BuildConfig" />
</Match>

 

If we want to ignore a particular pattern, we can do so like this:

<!-- No need to force hashCode for simple models -->
<Match>
   <Bug pattern="HE_EQUALS_USE_HASHCODE" />
</Match>

 

Sometimes, you’d want to ignore a pattern only for certain files or fields. FindBugs supports regex to match such items:

<!-- Don't complain about rules in tests. -->
<Match>
   <Field name="~.*mockitoRule"/>
   <Bug pattern="URF_UNREAD_PUBLIC_OR_PROTECTED_FIELD" />
</Match>

 

You can also annotate your code to suppress a warning in a particular class, method or field rather than disabling it for the whole project. For that, you need to add the FindBugs annotations dependency to the project:

compile 'com.google.code.findbugs:findbugs-annotations:3.0.1'

 

And then use it like this:

@SuppressFBWarnings(
   value = "ICAST_IDIV_CAST_TO_DOUBLE",
   justification = "We want granularity to be integer")
public void showChart(LineChart lineChart) {
   ...
}

 

It also allows setting the justification for suppressing the rule, for clarity.

PMD

For PMD, you can find the easy and hard configuration here

Like checkstyle, you have to first add a rule set to tell PMD which checks to perform:

<rule ref="rulesets/java/android.xml" />

 

If you want to modify the default value of the rule, you can do it like this:

<rule ref="rulesets/java/codesize.xml/TooManyMethods">
   <properties>
       <property name="maxmethods" value="15" />
   </properties>
</rule>

 

Or if you want to entirely exclude a rule, you can do it like this:

<rule ref="rulesets/java/basic.xml">
   <exclude name="OverrideBothEqualsAndHashcode" />
</rule>

 

PMD also supports suppressing warnings in the code itself using annotations. You don’t need any external library for it, as it supports the built-in java.lang.SuppressWarnings annotation. You can use it like this:

@SuppressWarnings("PMD.AvoidInstantiatingObjectsInLoops") // Entries cannot be created outside loop
private LineDataSet setData(Map<String, Long> map, String label) throws ParseException {
   ...
}

 

As you can see, we need to prepend “PMD.” to the rule name so that there are no clashes during annotation processing. Remember to comment on the reason for suppressing the warning so that your co-developers know about it and can remove it in the future if the criteria no longer apply.

There is a lot more to learn about these static analyzers, which you can read upon in their official documentation:


Avoiding Nested Callbacks using RxJS in Loklak Scraper JS

Loklak Scraper JS, as suggested by the name, is a set of scrapers for social media websites written in NodeJS. One of the most common requirements while scraping is that there is a parent webpage which provides links to related child webpages, and the required data needs to be scraped from both the parent webpage and the child webpages. For example, let’s say we want to scrape Quora user profiles matching the search query “Siddhant”. The matching profiles webpage for this example is https://www.quora.com/search?q=Siddhant&type=profile, which is the parent webpage, and the child webpages are the links of each matched profile.

Now, a simplistic approach is to first obtain the HTML of the parent webpage and then synchronously fetch the HTML of the child webpages and parse them to get the desired data. The problem with this approach is that it is slow, as it is synchronous.

A different approach is to use request-promise-native to implement the logic in an asynchronous way. But there are limitations with this approach: the HTML of the child webpages can only be fetched after the HTML of the parent webpage is obtained, and the number of child webpages is dynamic. So, there is a request dependency between parent and child, i.e. only when we have data from the parent webpage can we extract data from the child webpages. The code would look like this:

request(parent_url)
   .then(data => {
       ...
       request(child_url)
           .then(data => {
               // again nesting of child urls
           })
           .catch(error => {

           });
   })
   .catch(error => {

   });

 

Firstly, with this approach there is callback hell. Horrible, isn’t it? And then we don’t know how many nested callbacks to use, as the number of child webpages is dynamic.

The saviour: RxJS

The solution to our problem is reactive extensions in JavaScript. Using rxjs we can obtain the required data without callback hell and asynchronously!

First, the request-promise object of the parent webpage is obtained. Using this request-promise object, an observable is generated with Rx.Observable.fromPromise. The flatMap operator is used to parse the HTML of the parent webpage and obtain the links of the child webpages. Then the map method is used to transform the links into request-promise objects, which are again transformed into observables. The returned value – the HTML – from the resulting observables is parsed and accumulated using the zip operator. Finally, the accumulated data is subscribed to. This is implemented in the getScrapedData method of the Quora JS scraper.

getScrapedData(query, callback) {
   // observable from parent webpage
   Rx.Observable.fromPromise(this.getSearchQueryPromise(query))
     .flatMap((t, i) => { // t is html of parent webpage
       // request-promise object of child webpages
       let profileLinkPromises = this.getProfileLinkPromises(t);
       // request-promise object to observable transformation
       let obs = profileLinkPromises.map(elem => Rx.Observable.fromPromise(elem));

       // each Quora profile is parsed
       return Rx.Observable.zip( // accumulation of data from child webpages
         ...obs,
         (...profileLinkObservables) => {
           let scrapedProfiles = [];
           for (let i = 0; i < profileLinkObservables.length; i++) {
             let $ = cheerio.load(profileLinkObservables[i]);
             scrapedProfiles.push(this.scrape($));
           }
           return scrapedProfiles; // accumulated data returned
         }
       )
     })
     .subscribe( // desired data is subscribed
       scrapedData => callback({profiles: scrapedData}),
       error => callback(error)
     );
 }

 

Resources:


Posting Tweet from Loklak Wok Android

Loklak Wok Android is a peer harvester that posts collected tweets to the Loklak Server. Not only is it a peer harvester, it also lets users post their tweets from the app. Images and the location of the user can also be attached to the tweet. This blog explains how tweet posting is implemented in the app.

Adding Dependencies to the project

In app/build.gradle:

apply plugin: 'com.android.application'
apply plugin: 'me.tatarka.retrolambda'

android {
   ...
   packagingOptions {
       exclude 'META-INF/rxjava.properties'
   }
}

dependencies {
   ...
   compile 'com.google.code.gson:gson:2.8.1'

   compile 'com.squareup.retrofit2:retrofit:2.3.0'
   compile 'com.squareup.retrofit2:converter-gson:2.3.0'
   compile 'com.squareup.retrofit2:adapter-rxjava2:2.3.0'

   compile 'io.reactivex.rxjava2:rxjava:2.0.5'
   compile 'io.reactivex.rxjava2:rxandroid:2.0.1'
}

 

In build.gradle project level:

dependencies {
   classpath 'com.android.tools.build:gradle:2.3.3'
   classpath 'me.tatarka:gradle-retrolambda:3.2.0'
}

 

Implementation

Users first authorize the application so that they are able to post tweets from the app. For posting a tweet, Twitter’s statuses/update API endpoint is used, and for attaching images to a tweet, the media/upload API endpoint is used.
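
The exact Retrofit interface used for these two endpoints is not shown in this post; a rough sketch of how such endpoints could be declared (the interface shape and names here are assumptions based on Twitter’s documented parameters, and media/upload is actually served from upload.twitter.com, so it would normally sit behind a separate base URL) is:

public interface TwitterStatusAPI {

    // statuses/update: posts the tweet text, optional media ids and optional coordinates
    @FormUrlEncoded
    @POST("/1.1/statuses/update.json")
    Observable<StatusUpdate> postTweet(@Field("status") String status,
                                       @Field("media_ids") String mediaIds,
                                       @Field("lat") Double latitude,
                                       @Field("long") Double longitude);

    // media/upload: multipart upload of the raw image bytes, returns the media id
    @Multipart
    @POST("/1.1/media/upload.json")
    Observable<MediaUpload> getMediaId(@Part("media") RequestBody media,
                                       @Part("additional_owners") RequestBody additionalOwners);
}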

As photos and location can be attached to a tweet, on Android Marshmallow and above we need to ask for runtime permissions for the camera, gallery and location. The related permissions are declared in the Manifest file first:

<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
// for location
<uses-feature android:name="android.hardware.location.gps"/>
<uses-feature android:name="android.hardware.location.network"/>

 

If the device is running an OS below Android Marshmallow, there are no runtime permissions; the user is asked for the permissions at the time of installing the app.

Now, the runtime permissions are requested; if the user has already granted the permission, the related activity (camera, gallery or location) is started.

For camera permissions, onClickCameraButton is called

@OnClick(R.id.camera)
public void onClickCameraButton() {
   int permission = ContextCompat.checkSelfPermission(
           getActivity(), Manifest.permission.CAMERA);
   if (isAndroidMarshmallowAndAbove && permission != PackageManager.PERMISSION_GRANTED) {
       String[] permissions = {
               Manifest.permission.CAMERA,
               Manifest.permission.WRITE_EXTERNAL_STORAGE,
               Manifest.permission.READ_EXTERNAL_STORAGE
       };
       requestPermissions(permissions, CAMERA_PERMISSION);
   } else {
       startCameraActivity();
   }
}

 

To start the camera activity if the permission is already granted, startCameraActivity method is called

private void startCameraActivity() {
   Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
   File dir = getActivity().getExternalFilesDir(Environment.DIRECTORY_PICTURES);
   mCapturedPhotoFile = new File(dir, createFileName());
   Uri capturedPhotoUri = getImageFileUri(mCapturedPhotoFile);
   intent.putExtra(MediaStore.EXTRA_OUTPUT, capturedPhotoUri);
   startActivityForResult(intent, REQUEST_CAPTURE_PHOTO);
}

 

If the user decides to save the photo taken from the camera activity, the photo is saved by creating a file, and its URI is required to display the saved photo. The filename is created using the createFileName method:

private String createFileName() {
   String timeStamp = new SimpleDateFormat("ddMMyyyy_HHmmss").format(new Date());
   return "JPEG_" + timeStamp + ".jpg";
}

 

and uri is obtained using getImageFileUri

private Uri getImageFileUri(File file) {
   if (Build.VERSION.SDK_INT < Build.VERSION_CODES.N) {
       return Uri.fromFile(file);
   } else {
       return FileProvider.getUriForFile(getActivity(), "org.loklak.android.provider", file);
   }
}

 

Similarly, for the gallery, onClickGalleryButton method is implemented to ask runtime permissions and launch gallery activity if the permission is already granted.

@OnClick(R.id.gallery)
public void onClickGalleryButton() {
   int permission = ContextCompat.checkSelfPermission(
           getActivity(), Manifest.permission.WRITE_EXTERNAL_STORAGE);
   if (isAndroidMarshmallowAndAbove && permission != PackageManager.PERMISSION_GRANTED) {
       String[] permissions = {
               Manifest.permission.WRITE_EXTERNAL_STORAGE,
               Manifest.permission.READ_EXTERNAL_STORAGE
       };
       requestPermissions(permissions, GALLERY_PERMISSION);
   } else {
       startGalleryActivity();
   }
}

 

For starting the gallery activity, startGalleryActivity is used

private void startGalleryActivity() {
   Intent intent = new Intent();
   intent.setType("image/*");
   intent.setAction(Intent.ACTION_GET_CONTENT);
   intent.putExtra(Intent.EXTRA_ALLOW_MULTIPLE, true);
   startActivityForResult(
           Intent.createChooser(intent, "Select images"), REQUEST_GALLERY_MEDIA_SELECTION);
}

 

And finally for location onClickAddLocationButton is implemented

@OnClick(R.id.location)
public void onClickAddLocationButton() {
   int permission = ContextCompat.checkSelfPermission(
           getActivity(), Manifest.permission.ACCESS_FINE_LOCATION);
   if (isAndroidMarshmallowAndAbove && permission != PackageManager.PERMISSION_GRANTED) {
       String[] permissions = {Manifest.permission.ACCESS_FINE_LOCATION};
       requestPermissions(permissions, LOCATION_PERMISSION);
   } else {
       getLatitudeLongitude();
   }
}

 

If the permission is already granted, getLatitudeLongitude is called. Using LocationManager, the last known location is requested first; if there is no last known location, the current location is requested using a LocationListener.

private void getLatitudeLongitude() {
   mLocationManager =
           (LocationManager) getActivity().getSystemService(Context.LOCATION_SERVICE);

   // last known location from network provider
   Location location = mLocationManager.getLastKnownLocation(LocationManager.NETWORK_PROVIDER);
   if (location == null) { // last known location from gps
       location = mLocationManager.getLastKnownLocation(LocationManager.GPS_PROVIDER);
   }

   if (location != null) { // last known loaction available
       mLatitude = location.getLatitude();
       mLongitude = location.getLongitude();
       setLocation();
   } else { // last known location not available
       mLocationListener = new TweetLocationListener();
       // current location requested
       mLocationManager.requestLocationUpdates("gps", 1000, 1000, mLocationListener);
   }
}

 

TweetLocationListener implements a LocationListener that provides the current location. If GPS is disabled, the location settings screen is launched so that the user can enable GPS. This is implemented in the onProviderDisabled callback of the listener.

private class TweetLocationListener implements LocationListener {

   @Override
   public void onLocationChanged(Location location) {
       mLatitude = location.getLatitude();
       mLongitude = location.getLongitude();
       setLocation();
   }

   @Override
   public void onStatusChanged(String s, int i, Bundle bundle) {

   }

   @Override
   public void onProviderEnabled(String s) {

   }

   @Override
   public void onProviderDisabled(String s) {
       Intent intent = new Intent(Settings.ACTION_LOCATION_SOURCE_SETTINGS);
       startActivity(intent);
   }
}

 

If the user is asked for permissions, the onRequestPermissionsResult callback is invoked; if the permission is granted, the respective activity is opened or the latitude and longitude are obtained.

@Override
public void onRequestPermissionsResult(
       int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
   boolean isResultGranted = grantResults[0] == PackageManager.PERMISSION_GRANTED;
   switch (requestCode) {
       case CAMERA_PERMISSION:
           if (grantResults.length > 0 && isResultGranted) {
               startCameraActivity();
           }
           break;
       case GALLERY_PERMISSION:
           if (grantResults.length > 0 && isResultGranted) {
               startGalleryActivity();
           }
           break;
       case LOCATION_PERMISSION:
           if (grantResults.length > 0 && isResultGranted) {
               getLatitudeLongitude();
           }
   }
}

 

Since the camera and gallery activities are started to obtain a result, i.e. photo(s), the onActivityResult callback is invoked:

@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
   switch (requestCode) {
       case REQUEST_CAPTURE_PHOTO:
           if (resultCode == Activity.RESULT_OK) {
               onSuccessfulCameraActivityResult();
           }
           break;
       case REQUEST_GALLERY_MEDIA_SELECTION:
           if (resultCode == Activity.RESULT_OK) {
               onSuccessfulGalleryActivityResult(data);
           }
           break;
       default:
           super.onActivityResult(requestCode, resultCode, data);
   }
}

 

If the result of the camera activity is a success, i.e. the image is saved by the user, the saved image is displayed in a RecyclerView in TweetPostingFragment. This is implemented in the onSuccessfulCameraActivityResult method:

private void onSuccessfulCameraActivityResult() {
   tweetMultimediaContainer.setVisibility(View.VISIBLE);
   Bitmap bitmap = BitmapFactory.decodeFile(mCapturedPhotoFile.getAbsolutePath());
   mTweetMediaAdapter.clearAdapter();
   mTweetMediaAdapter.addBitmap(bitmap);
}

 

For the gallery activity, if a single image is selected then the URI of the image can be obtained using the getData method of the Intent. If multiple images are selected, the URIs of the images are stored in ClipData. After the URIs of the images are obtained, it is checked whether more than 4 images are selected, as Twitter allows at most 4 images in a tweet. If more than 4 images are selected, the URIs of the extra images are removed. Using the URIs of the images, the files are obtained, and from the files Bitmaps are obtained which are displayed in the RecyclerView. This is implemented in onSuccessfulGalleryActivityResult:

private void onSuccessfulGalleryActivityResult(Intent intent) {
   tweetMultimediaContainer.setVisibility(View.VISIBLE);
   Context context = getActivity();

   // get uris of selected images
   ClipData clipData = intent.getClipData();
   List<Uri> uris = new ArrayList<>();
   if (clipData != null) {
       for (int i = 0; i < clipData.getItemCount(); i++) {
           ClipData.Item item = clipData.getItemAt(i);
           uris.add(item.getUri());
       }
   } else {
       uris.add(intent.getData());
   }

    // keep at most 4 images; drop the URIs of any extra selections
   int numberOfSelectedImages = uris.size();
   if (numberOfSelectedImages > 4) {
       while (numberOfSelectedImages-- > 4) {
           uris.remove(numberOfSelectedImages);
       }
       Utility.displayToast(mToast, context, moreImagesMessage);
   }

   // get bitmap from uris of images
   List<Bitmap> bitmaps = new ArrayList<>();
   for (Uri uri : uris) {
       String filePath = FileUtils.getPath(context, uri);
       Bitmap bitmap = BitmapFactory.decodeFile(filePath);
       bitmaps.add(bitmap);
   }

   // display images in RecyclerView
   mTweetMediaAdapter.setBitmapList(bitmaps);
}

 

Now, to post images with a tweet, the ID of each image first needs to be obtained using the media/upload API endpoint via a multipart POST request, and then the obtained ID(s) are passed as the value of “media_ids” to the statuses/update API endpoint. Since there can be more than one image, a single observable is created for each image. The bitmap is converted to raw bytes for the multipart POST request. As the process includes a network request and converting a bitmap to bytes (a resource-heavy task which shouldn’t be on the main thread), an observable is created for it, as a result of which the tasks are performed concurrently, i.e. on a separate thread.

private Observable<String> getImageId(Bitmap bitmap) {
   return Observable
           .defer(() -> {
               // convert bitmap to bytes
               ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
               bitmap.compress(Bitmap.CompressFormat.JPEG, 100, byteArrayOutputStream);
               byte[] bytes = byteArrayOutputStream.toByteArray();
               RequestBody mediaBinary = RequestBody.create(MultipartBody.FORM, bytes);
               return Observable.just(mediaBinary);
           })
           .flatMap(mediaBinary -> mTwitterMediaApi.getMediaId(mediaBinary, null))
           .flatMap(mediaUpload -> Observable.just(mediaUpload.getMediaIdString()))
           .subscribeOn(Schedulers.newThread());
}

 

The tweet is posted when the “Tweet” button is clicked, by invoking the onClickTweetPostButton method:

@OnClick(R.id.tweet_post_button)
public void onClickTweetPostButton() {
   String status = tweetPostEditText.getText().toString();

   List<Bitmap> bitmaps = mTweetMediaAdapter.getBitmapList();
   List<Observable<String>> mediaIdObservables = new ArrayList<>();
   for (Bitmap bitmap : bitmaps) { // observables for images is created
       mediaIdObservables.add(getImageId(bitmap));
   }

   if (mediaIdObservables.size() > 0) {
       // Post tweet with image
       postImageAndTextTweet(mediaIdObservables, status);
   } else if (status.length() > 0) {
       // Post text only tweet
       postTextOnlyTweet(status);
   } else {
       Utility.displayToast(mToast, getActivity(), tweetEmptyMessage);
   }
}

 

Tweets containing images are posted by calling postImageAndTextTweet; once the tweet data is obtained, the data is cross-posted to the loklak server. The image IDs are obtained concurrently by using the zip operator.

private void postImageAndTextTweet(List<Observable<String>> imageIdObservables, String status) {
   mProgressDialog.show();
   ConnectableObservable<StatusUpdate> observable = Observable.zip(
           imageIdObservables,
           mediaIdArray -> {
               String mediaIds = "";
               for (Object mediaId : mediaIdArray) {
                   mediaIds = mediaIds + String.valueOf(mediaId) + ",";
               }
               return mediaIds.substring(0, mediaIds.length() - 1);
           })
           .flatMap(imageIds -> mTwitterApi.postTweet(status, imageIds, mLatitude, mLongitude))
           .subscribeOn(Schedulers.io())
           .publish();

   Disposable postingDisposable = observable
           .subscribeOn(Schedulers.io())
           .observeOn(AndroidSchedulers.mainThread())
           .subscribe(this::onSuccessfulTweetPosting, this::onErrorTweetPosting);
   mCompositeDisposable.add(postingDisposable);

   // cross posting to loklak server   
   Disposable crossPostingDisposable = observable
           .flatMap(this::pushTweetToLoklak)
           .subscribeOn(Schedulers.io())
           .observeOn(AndroidSchedulers.mainThread())
           .subscribe(
                   push -> {},
                   t -> Log.e(LOG_TAG, "Cross posting failed: " + t.toString())
           );
   mCompositeDisposable.add(crossPostingDisposable);

   Disposable publishDisposable = observable.connect();
   mCompositeDisposable.add(publishDisposable);
}

 

In the case of text-only tweets, the text is obtained from the EditText and the mediaIds parameter is passed as null. Once the tweet data is obtained, it is cross-posted to the loklak server. This is executed by calling postTextOnlyTweet:

private void postTextOnlyTweet(String status) {
   mProgressDialog.show();
   ConnectableObservable<StatusUpdate> observable =
           mTwitterApi.postTweet(status, null, mLatitude, mLongitude)
           .subscribeOn(Schedulers.io())
           .publish();

   Disposable postingDisposable = observable
           .subscribeOn(Schedulers.io())
           .observeOn(AndroidSchedulers.mainThread())
           .subscribe(this::onSuccessfulTweetPosting, this::onErrorTweetPosting);
   mCompositeDisposable.add(postingDisposable);


   // cross posting to loklak server
   Disposable crossPostingDisposable = observable
           .flatMap(this::pushTweetToLoklak)
           .subscribeOn(Schedulers.io())
           .observeOn(AndroidSchedulers.mainThread())
           .subscribe(
                   push -> Log.e(LOG_TAG, push.getStatus()),
                   t -> Log.e(LOG_TAG, "Cross posting failed: " + t.toString())
           );
   mCompositeDisposable.add(crossPostingDisposable);

   Disposable publishDisposable = observable.connect();
   mCompositeDisposable.add(publishDisposable);
}

 

Resources


Implementing 3 legged Authorization in Loklak Wok Android for Twitter

Loklak Wok Android is a peer harvester that posts collected tweets to the Loklak Server. Not only is it a peer harvester, it also lets users post their tweets from the app. Posting tweets from the app requires users to authorize the Loklak Wok app, the client app created at https://apps.twitter.com/. This blog explains the authorization process in detail.

Adding Dependencies to the project

In app/build.gradle:

apply plugin: 'com.android.application'
apply plugin: 'me.tatarka.retrolambda'

android {
   ...
   packagingOptions {
       exclude 'META-INF/rxjava.properties'
   }
}

dependencies {
   ...
   compile 'com.google.code.gson:gson:2.8.1'

   compile 'com.squareup.retrofit2:retrofit:2.3.0'
   compile 'com.squareup.retrofit2:converter-gson:2.3.0'
   compile 'com.squareup.retrofit2:adapter-rxjava2:2.3.0'

   compile 'io.reactivex.rxjava2:rxjava:2.0.5'
   compile 'io.reactivex.rxjava2:rxandroid:2.0.1'
}

 

In build.gradle project level:

dependencies {
   classpath 'com.android.tools.build:gradle:2.3.3'
   classpath 'me.tatarka:gradle-retrolambda:3.2.0'
}

 

Steps of Authorization

Step 1: Create client app in Twitter

Create a Twitter client app at https://apps.twitter.com/. Provide the mandatory entries and also a callback URL (it will be used in the next steps). Then go to “Keys and Access Tokens” and save your consumer key and consumer secret. In case you want to use the Twitter API for yourself, click on “Create my access token”, which provides an access token and access token secret.

Step 2: Obtaining a request token

Using the “consumer key” and “consumer secret”, a request token is obtained by sending a POST request to oauth/request_token. As the Twitter APIs are OAuth1 based, the request needs to be signed by generating an oauth_signature. The oauth_signature is generated by intercepting the network request sent by the Retrofit REST API client; the OAuth interceptor used in Loklak Wok Android is a modified version of this snippet. The Retrofit TwitterAPI interface is defined as:

public interface TwitterAPI {

   String BASE_URL = "https://api.twitter.com/";

   @POST("/oauth/request_token")
   Observable<ResponseBody> getRequestToken();

   @FormUrlEncoded
   @POST("/oauth/access_token")
   Observable<ResponseBody> getAccessTokenAndSecret(@Field("oauth_verifier") String oauthVerifier);
}

 

And the Retrofit REST client is implemented in TwitterRestClient. The createTwitterAPIWithoutAccessToken method returns a Twitter API client which can be called without providing access keys; this is used as we don’t have access tokens yet.

public static TwitterAPI createTwitterAPIWithoutAccessToken() {
   if (sWithoutAccessTokenRetrofit == null) {
       sLoggingInterceptor.setLevel(HttpLoggingInterceptor.Level.BODY);
       // uncomment to debug network requests
       // sWithoutAccessTokenClient.addInterceptor(sLoggingInterceptor);
       sWithoutAccessTokenRetrofit = sRetrofitBuilder
               .client(sWithoutAccessTokenClient.build()).build();
   }
   return sWithoutAccessTokenRetrofit.create(TwitterAPI.class);
}

 

So, the getRequestToken method is used to obtain the request token; if the request is successful, an oauth_token is returned.

@OnClick(R.id.twitter_authorize)
public void onClickTwitterAuthorizeButton(View view) {
   mTwitterApi.getRequestToken()
           .subscribeOn(Schedulers.io())
           .observeOn(AndroidSchedulers.mainThread())
           .subscribe(this::parseRequestTokenResponse, this::onFetchRequestTokenError);
}
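
The parseRequestTokenResponse method is not shown in this post; a rough sketch (assuming the standard oauth_token=...&oauth_token_secret=... response body) of how it could extract the request token and build the authorization URL used in the next step:

private void parseRequestTokenResponse(ResponseBody responseBody) throws IOException {
    // body looks like: oauth_token=XXX&oauth_token_secret=YYY&oauth_callback_confirmed=true
    String[] values = responseBody.string().split("&");
    mOauthToken = values[0].substring(values[0].indexOf('=') + 1);
    // Twitter's authorization page for this request token, loaded in the WebView in Step 3
    mAuthorizationUrl = "https://api.twitter.com/oauth/authorize?oauth_token=" + mOauthToken;
    setAuthorizationView();
}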

 

Step 3: Redirecting the user

Using the oauth_token obtained in Step 2, the user is redirected to the Twitter login page using a WebView.

private void setAuthorizationView() {
   ...
   webView.setVisibility(View.VISIBLE);
   webView.loadUrl(mAuthorizationUrl);
}

 

A WebView client is created by extending WebViewClient; it is used to keep track of which webpage is opened by overriding shouldOverrideUrlLoading.

@Override
public boolean shouldOverrideUrlLoading(WebView view, String url) {
   if (url.contains("github")) {
       String[] tokenAndVerifier = url.split("&");
       mOAuthVerifier = tokenAndVerifier[1].substring(tokenAndVerifier[1].indexOf('=') + 1);
       getAccessTokenAndSecret();
       return true;
   }
   return false;
}

 

As the link provided as the callback URL while creating our Twitter app is a GitHub page, the WebViewClient checks whether the loaded page is that GitHub page. If yes, it parses the oauth_verifier from the GitHub URL.

Step 4: Converting the request token to an access token

A new REST client is created using the request token obtained in Step 2, as implemented in the createTwitterAPIWithAccessToken method.

public static TwitterAPI createTwitterAPIWithAccessToken(String token) {
   TwitterOAuthInterceptor withAccessTokenInterceptor =
           sInterceptorBuilder.accessToken(token).accessSecret("").build();
   OkHttpClient withAccessTokenClient = new OkHttpClient.Builder()
           .addInterceptor(withAccessTokenInterceptor)
           //.addInterceptor(loggingInterceptor) // uncomment to debug network requests
           .build();
   Retrofit withAccessTokenRetrofit = sRetrofitBuilder.client(withAccessTokenClient).build();
   return withAccessTokenRetrofit.create(TwitterAPI.class);
}

 

Now, to obtain the access token and access token secret, the oauth_verifier obtained in Step 3 is passed as a parameter to the getAccessTokenAndSecret method defined in the TwitterAPI interface, which calls the oauth/access_token endpoint on the REST client created above. This is implemented in the getAccessTokenAndSecret method of the WebViewClient class:

private void getAccessTokenAndSecret() {
   mTwitterApi = TwitterRestClient.createTwitterAPIWithAccessToken(mOauthToken);
   mTwitterApi.getAccessTokenAndSecret(mOAuthVerifier)
           .flatMap(this::saveAccessTokenAndSecret)
           ....
}

 

Finally, the obtained access_token and access_token_secret are saved in SharedPreferences so that they can be used to call other Twitter API endpoints, as in saveAccessTokenAndSecret:

private Observable<Integer> saveAccessTokenAndSecret(ResponseBody responseBody)
       throws IOException {
   String[] responseValues = responseBody.string().split("&");

   String token = responseValues[0].substring(responseValues[0].indexOf("=") + 1);
   SharedPrefUtil.setSharedPrefString(getActivity(), OAUTH_ACCESS_TOKEN_KEY, token);
   mOauthToken = token; // here access_token that would be used for API calls

   String tokenSecret = responseValues[1].substring(responseValues[1].indexOf("=") + 1);
   SharedPrefUtil.setSharedPrefString(
           getActivity(), OAUTH_ACCESS_TOKEN_SECRET_KEY, tokenSecret);
   mOauthTokenSecret = tokenSecret;
   return Observable.just(1);
}

 

Resources:
