Using Order Endpoints in Open Event API Server

The Ordering API is the main feature added to the Open Event API Server. These endpoints provide the ability to work with the ordering system. This API is more involved than the others, as it also validates discount codes and performs several other checks.
In layman's terms, the process is simple: first, a user must be registered or added as an attendee on the server, without any order_id associated, and then the attendee details are sent to the API server as a relationship.

Things to take care of:

  1. Validating the discount code and ensuring it is not exhausted
  2. Calculating the total amount on the server side by applying the coupon
  3. Skipping the amount calculation if the user is the event admin
  4. Skipping the coupon if the user is the event admin
  5. Handling payment modes and generating payment links
  6. Ensuring that the default status is always pending, unless the user is the event admin

Creating Order

    • Prerequisite
      Before initiating the order, attendee records associated with the event need to be created. These records will not have any order_id associated with them initially; the Order API will add the relationships.
    • Required Body
      The Order API requires you to send the event relationship and the attendee records to create order tickets (see the sketch after this list).
    • Permissions
      Only organizers can provide a custom amount and status. Other users get their status as pending, and the amount is recalculated on the server. The response reflects the calculated amount and the updated status.
      Also, to initiate any order the user must be logged in; guests cannot create orders.
    • Payment Modes
      There are three payment modes: free, stripe and paypal. If payment_mode is not provided, the API defaults to “free”.
    • Discount Codes
      A discount code can be sent as a relationship to the API. The server will validate the code and act accordingly.
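For illustration, a minimal sketch of an order-creation request is shown below. It assumes standard JSON:API conventions; the server URL, resource names and attribute set are illustrative and should be checked against the API docs.

import requests

payload = {
    "data": {
        "type": "order",
        "attributes": {"payment-mode": "free"},
        "relationships": {
            "event": {"data": {"type": "event", "id": "1"}},
            "attendees": {"data": [{"type": "attendee", "id": "10"}]}
        }
    }
}

response = requests.post(
    "https://api.example.com/v1/orders",  # hypothetical server URL
    json=payload,
    headers={
        "Content-Type": "application/vnd.api+json",
        "Authorization": "JWT <token>"  # orders require a logged-in user
    }
)
print(response.status_code, response.json())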

Validating Discount Codes

Discount codes are checked to ensure they are valid. The first check ensures that the user is not a co-organizer:

# Apply discount only if the user is not event admin
if data.get('discount') and not has_access('is_coorganizer', event_id=data['event']):

The second check ensures that the discount code is active:

if not discount_code.is_active:
  raise UnprocessableEntity({'source': 'discount_code_id'}, "Inactive Discount Code")

The third check ensures that the code's validity period has not expired:

if not (valid_from <= now <= valid_till):
  raise UnprocessableEntity({'source': 'discount_code_id'}, "Inactive Discount Code")

The fourth check ensures that the discount quantity is not exhausted:

if not TicketingManager.match_discount_quantity(discount_code, data['ticket_holders']):
  raise UnprocessableEntity({'source': 'discount_code_id'}, 'Discount Usage Exceeded')

Lastly, the fifth check ensures that the event ID matches the event associated with the discount code:

if discount_code.event.id != data['event'] and discount_code.user_for == TICKET:
  raise UnprocessableEntity({'source': 'discount_code_id'}, "Invalid Discount Code")

Calculating Order Amount

The next important step is recalculating the order amount; it is recalculated only if the user is not the event admin:

if not has_access('is_coorganizer', **view_kwargs):
  TicketingManager.calculate_update_amount(order)

API Response

Apart from the general fields, the API response provides a payment-url depending on the payment mode you selected:

  • Stripe: payment-url contains the Stripe payment URL
  • PayPal: payment-url contains the URL for completing the payment

This explains the flow and the requirements to create an order. The Order API involves many more pieces related to the TicketingManager, which creates the payment URL, applies the discount code, and calculates the total order amount.

Resources

  1. Stripe Payments API Docs
    https://stripe.com/docs/api
  2. Paypal Payments API docs
    https://developer.paypal.com/docs/api/
  3. Paypal Sandbox docs
    https://developer.paypal.com/docs/classic/lifecycle/ug_sandbox/

 


Sorting Users and Implementing Animations on SUSI Web Chat Team Page

While developing the chat application, we wanted to show details of the developers, so we planned to build a team page for the SUSI Web Chat application. In this post I discuss a few of the things we built there: the sorting feature, the page animations, and the usage of Material UI.

First we made an array of objects to store user details. In that array we grouped them into sub-arrays so we can refer to them separately in different sections. We stored the following data in an array in “TeamList.js”:

var team = [{
 'mentors': [{
   'name': 'Mario Behling',
   'github': 'http://github.com/mariobehling',
   'avatar': 'http://fossasia.org/img/mariobehling.jpg',
   'twitter': 'https://twitter.com/mariobehling',
   'linkedin': 'https://www.linkedin.com/in/mariobehling/',
   'blog': '#'
 }]
},{ 'server': [{
    // developer objects with the same fields as above
    }]
}];

There was a requirement to sort developers by name, so we had to build a way to sort the sub-arrays inside the main array. This is the comparison function we use for sorting:

   function compare(a, b) {
     if (a.name < b.name) { return -1; }
     if (a.name > b.name) { return 1; }
     return 0;
   }
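As a side note, String.prototype.localeCompare offers a locale-aware alternative; a minimal sketch, assuming every entry has a string name:

   function compare(a, b) {
     return a.name.localeCompare(b.name); // handles case and accents per locale
   }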

This is how we used it to sort the developers:

import team from './TeamList';
team[1].server.sort(compare);

Array.sort passes pairs of objects to this function and uses the return value to order them.
Now we have to show the sorted information in the view.
We extract the data we want from the array and use Material UI Cards to display it.
This is how we extracted data from the array:

   team[1].server.sort(compare);
   let server = team[1].server.map((serv, i) => {
     return (
       <Card className='team-card' key={i}>
         <CardMedia className="container">
           <img src={serv.avatar} alt={serv.name} className="image" />
           <div className="overlay">
             <div className="text"> <FourButtons member={serv} /> </div>
           </div>
         </CardMedia>
         <CardTitle title={serv.name} subtitle={serv.designation} />
       </Card>
     );
   });

Now the view shows the data.
The CardMedia element contains an image of the member. We need to show the member's social media links on mouseover. We did that using simple CSS; I added a block comment around those particular styles. Check those styles here:

.overlay {
 position: absolute;
 bottom: 100%;
 left: 0;
 right: 0;
 background-color: #000;
 overflow: hidden;
 width: 100%;
 height:0;
 transition: .3s ease;
 opacity:0;
}
.container:hover .overlay {
 bottom: 0;
 height: 100%;
 opacity:0.7;
}

The lines above show how we animate the overlay.

Now we want to show social media buttons on the overlay. We made a separate component for the buttons and return them like this:

render() {
  let member = this.props.member;
  return (
    <div>
      <CardActions>
        <IconButton href={member.github} target="_blank">
          {/* icon child goes here */}
        </IconButton>
        {/* ...IconButtons for the other profile links... */}
      </CardActions>
    </div>
  );
}

Finally we show the data on the team page by returning this JSX from the render() method:

         <div className="team-header">
           <div className="support__heading">Developers</div>
         </div>
         <div className='team-container'>{server}</div>

I have mentioned below a few resources which I used to implement these features. If you are willing to contribute to the SUSI AI Web Chat application, fork our repository on GitHub.

Resources

Documentation of Array.sort https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/sort

How to use Image overlay CSS effects: https://www.w3schools.com/howto/howto_css_image_overlay.asp


Preparing for Automatic Publishing of Android Apps in Play Store

I spent this week searching through libraries and services which provide a way to publish built APKs directly through an API, so that the repositories for Android apps can trigger publishing automatically after each push on the master branch. The projects to be auto-deployed are:

I had my eyes on fastlane for a couple of months, and it turned out to be the best solution for the task. The tool not only allows publishing of APK files, but also of Play Store listings, screenshots, and changelogs. And that is only a subset of the capabilities bundled in its subservice, supply.

There is a setup process to complete before you can start using this service, which I will go through step by step in this blog. The process is also outlined in the README of the supply project.

Enabling API Access

The first step in the process is to enable API access in your Play Store Developer account if you haven’t done so. For that, you have to open the Play Dev Console and go to Settings > Developer Account > API access.

If this is the first time you are opening it, you'll be presented with a confirmation dialog detailing the ramifications of the action. Read the terms carefully and click accept if you agree with them. Once you do, you'll be presented with a settings panel like this:

Creating Service Account

As you can see, there is no registered service account here, so we need to create one. Click on the CREATE SERVICE ACCOUNT button and this dialog will pop up with instructions on how to do so:

So, open the highlighted link in a new tab and the Google API Console will open up, looking something like this:

Click on Create Service Account and fill in these details:

Account Name: Any name you want

Role: Project > Service Account Actor

And then, select Furnish a new private key and select JSON. Click CREATE.

A new JSON key will be created and downloaded to your device. Keep it secret: anyone with access to it can at least change the Play Store listings of your apps, even if they cannot upload new APKs in place of existing ones (those are protected by signing keys).

Granting Access

Now return to the Play Console tab (the one we left at the start of Creating Service Account) and click Done, as you have now created the service account. You should see the created service account listed like this:

Now click on grant access, choose Release Manager from Role dropdown, and select these PERMISSIONS:

Of course, you don't want the fastlane API to access financial data or manage orders. Other than that, it is up to you what to allow or disallow. The same goes for the expiry date; we have left it at never expire. Click on ADD USER and you'll see the Release Manager in the user list like below:

Now you are ready to use the fastlane service, or any other release management service for that matter.

Using fastlane

Install fastlane with:

sudo gem install fastlane

Go to your project folder and run:

fastlane supply init

First it will ask for the location of the private key JSON file you downloaded, and then for the package name of the application you are initializing fastlane for.

Then it will create a metadata folder with the listing information, excluding the images. So you'll have to download and place the images manually the first time.

After modifying the listing, images or APK, run the command:

fastlane supply run

That’s it. Your app along with the store listing has been updated!

This is a very brief introduction to the capabilities of the supply service. All interactive options can be supplied via command-line arguments, certain parts of the metadata can be omitted, and alpha/beta management along with staged release rollout can be done in steps! Make sure to check out the links below:
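As an example of the command-line usage, a non-interactive upload might look like the sketch below. The flag names are taken from the supply documentation, but verify them against your fastlane version:

fastlane supply --apk app-release.apk --track beta --skip_upload_images --skip_upload_screenshots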


Deploying preview using surge in Yaydoc

In Yaydoc, we save the documentation preview on our own server and then serve it using Express's static serve method. The problem is that Heroku's filesystem is not persistent, so our preview link expires within a few minutes. To solve this, I planned to deploy the preview to surge so that it doesn't expire. For that I wrote a shell script which deploys the preview to surge, and I invoke the script using child_process.

#!/bin/bash

while getopts l:t:e:u: option
do
 case "${option}"
 in
 l) LOGIN=${OPTARG};;
 t) TOKEN=${OPTARG};;
 e) EMAIL=${OPTARG};;
 u) UNIQUEID=${OPTARG};;
 esac
done

export SURGE_LOGIN=${LOGIN}
export SURGE_TOKEN=${TOKEN}

./node_modules/.bin/surge --project temp/${EMAIL}/${UNIQUEID}_preview --domain ${UNIQUEID}.surge.sh

In the above snippet, I export the SURGE_LOGIN and SURGE_TOKEN environment variables so that surge deploys the preview without asking for credentials. Then I execute surge, specifying the preview path and the preview domain name.
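For reference, the script can be invoked like this (the argument values are illustrative):

./surge_deploy.sh -l user@example.com -t <surge-token> -e user@example.com -u 1a2b3c

On the Node side, the script is spawned like this: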

var spawn = require('child_process').spawn; // needed for spawning the shell script

exports.deploySurge = function(data, surgeLogin, surgeToken, callback) {
  var args = [
    "-l", surgeLogin,
    "-t", surgeToken,
    "-e", data.email,
    "-u", data.uniqueId
  ];

  var spawnedProcess = spawn('./surge_deploy.sh', args);
  spawnedProcess.on('exit', function(code) {
    if (code === 0) {
      callback(null, {description: 'Deployed successfully'});
    } else {
      callback({description: 'Unable to deploy'}, null);
    }
  });
}

Whenever the user generates documentation, I invoke the shell script via child_process; if it exits with code 0, I pass the preview URL to the frontend over sockets so the user can access it.
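A sketch of how this might be wired together; the module path, socket setup and event names are illustrative:

var deploy = require('./deploy'); // hypothetical module exporting deploySurge

deploy.deploySurge(data, process.env.SURGE_LOGIN, process.env.SURGE_TOKEN, function (error, result) {
  if (error) {
    socket.emit('deploy-error', error.description); // event names are illustrative
  } else {
    socket.emit('preview-url', 'https://' + data.uniqueId + '.surge.sh');
  }
});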



Making SUSI.AI reach more users through messenger bots

SUSI.AI learns from the queries users ask it. The more queries are asked, the better SUSI.AI learns. Likewise, the more users engage with SUSI.AI, the more content is available for it to learn from. The challenge in front of us is to engage more users with SUSI.AI. In this blog post, the SUSI Tweetbot and SUSI FBbot are used as examples to show how we grow our user base through messenger bots.

Twitter:

Twitter has a 328 million user base according to this article's data. Integrating SUSI.AI with Twitter alone therefore makes it available to roughly 300 million users. Even if a small percentage of these users start using SUSI.AI, the growth in its user base could be substantial. A larger user base is advantageous as it provides more training data for SUSI.AI to learn from.

Sharing by public tweet:

Integration is just the first step towards increasing SUSI.AI's user base. The next step is to reach users and engage them in chatting with SUSI.AI.

Suppose a user asks SUSI.AI something, really likes the reply, and wants to share it with his/her followers. This can prove to be a golden opportunity for us to increase the reach of SUSI.AI: it nudges their friends to try SUSI.AI and have an amazing time with it.

It becomes clear that sharing messages is an indispensable feature and can help us a lot. Twitter doesn't offer sharing through direct messages, but it offers something even better: we can share the message as a public tweet and reach more users, including the user's followers.

 

To show this button, we use the Call To Action support provided by Twitter:

"message_data": {
          "text": txt,
          "ctas": [{
                      "type": "web_url",
                      "label": "Share with your followers",
                      "url": ""
          }]
}

The url key in the above code must hold a value that redirects to a UI allowing the user to publicly tweet this reply by SUSI.AI.
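A sketch of how such a payload might be assembled, with the reply text URL-encoded into the intent link (the variable names are illustrative):

var shareText = 'SUSI says: ' + txt; // txt is the reply text from the snippet above
var message_data = {
  text: txt,
  ctas: [{
    type: 'web_url',
    label: 'Share with your followers',
    url: 'https://twitter.com/intent/tweet?text=' + encodeURIComponent(shareText)
  }]
};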

Using “https://twitter.com/intent/tweet?text=” as the url value shows a new page with an empty tweet message, as the text query parameter in the URL has no value. We set the text parameter with a value such that we end up like this:

and after tweeting it:

Sending a direct message link with the tweet:

Twitter provides many features for direct messages. Shifting a user from tweets to direct messages is beneficial because we can more efficiently tell the user about SUSI.AI's capabilities and show him/her important links, such as its repository and the web chat client.

When a user tweets a query to the SUSI.AI page, we reply with a tweet back to the user. Along with that, we provide a link to privately message the SUSI.AI account if the user wants to.

This way the user ends up visiting SUSI.AI in a chat window:

To achieve this in the SUSI Tweetbot, Twitter provides a direct message URL. The URL https://twitter.com/messages/compose?recipient_id= redirects us to the chat window of the account whose recipient id is passed as the query string. In our case the URL turns out to be https://twitter.com/messages/compose?recipient_id=871446601000202244, as “871446601000202244” is the recipient id of the @SusiAI1 account on Twitter.

If we send this URL as text in our “tweet back” to the user, Twitter renders it as a clickable button labelled “Send a private message”, as shown above.

Hence we call the tweet function like this:

tweetIt('@' + from + ' ' + message + date +"\nhttps://twitter.com/messages/compose?recipient_id=871446601000202244");

Facebook:

As we all know, Facebook is the giant among social networking sites, so integrating SUSI.AI with Facebook is beneficial for us.

Unlike Twitter, on Facebook we can share messages with friends through direct messages. The last topic of this blog post walks you through adding the sharing feature to the SUSI FBbot.

We also take advantage of the SUSI FBbot to make SUSI.AI better: we can direct users of the SUSI messenger bots to the SUSI.AI repository and show them all the required information on how to contribute to the project.

The best way to do this is to show a “How to contribute” button when the user clicks on “Get started” in the messenger bot.

This blog post will help you with showing buttons along with the replies (by SUSI.AI).

When the user clicks this button, we send two messages back to the user as shown:

This way, through bots, we get the user to visit the SUSI.AI repository and contribute to it, or engage him/her, through our Gitter channel, in the discussion on what the next steps in improving SUSI.AI could be.

Resources:

  1. Link Ads to Messenger, Enhanced Mobile Websites, Payments and More by Seth Rosenberg from Facebook developers blog.
  2. Drive discovery of bots and other customer experiences in direct messages by Travis Lull from Twitter blog.

Image Loading in Open Event Organizer Android App using Glide

Open Event Organizer is an Android app for event organizers and entry managers, with the Open Event API Server acting as its backend. The core feature of the app is scanning a QR code from a ticket to validate an attendee's check-in; other features include an overview of sales and ticket management. Given this functionality, performance is very important: the app should remain usable even on a weak network. Speaking of performance, image loading should be handled efficiently, since it is not essential to the core functionality of the app. Open Event Organizer uses Glide, a fast and efficient image loading library created by Sam Judd. I will be talking about its implementation in the app in this blog.

The first part is configuring Glide in the app, and the library provides a very easy way to do that. Your app implements a class extending AppGlideModule, annotated as shown below, and the library generates a Glide API which can be used throughout the app for all the image loading. The AppGlideModule implementation in the Orga app looks like:

@GlideModule
public final class GlideAPI extends AppGlideModule {

   @Override
   public void registerComponents(Context context, Glide glide, Registry registry) {
       registry.replace(GlideUrl.class, InputStream.class, new OkHttpUrlLoader.Factory());
   }

   // TODO: Modify the options here according to the need
   @Override
   public void applyOptions(Context context, GlideBuilder builder) {
       int diskCacheSizeBytes = 1024 * 1024 * 10; // 10mb
       builder.setDiskCache(new InternalCacheDiskCacheFactory(context, diskCacheSizeBytes));
   }

   @Override
   public boolean isManifestParsingEnabled() {
       return false;
   }
}

 

This generates an API named GlideApp, by default in the same package, which can be used across the whole app. Just make sure to add the @GlideModule annotation to the implementation; that is how the library finds the class. The second part is using the generated GlideApp API to load images from URLs. The Orga app uses data binding for layouts, so all the image-loading code lives in one place, a DataBinding class used by the layouts. The class has a method named setGlideImage which takes an image view, an image URL, a placeholder drawable and a transformation. The relevant code is:

private static void setGlideImage(ImageView imageView, String url, Drawable drawable, Transformation<Bitmap> transformation) {
       if (TextUtils.isEmpty(url)) {
           if (drawable != null)
               imageView.setImageDrawable(drawable);
           return;
       }
       GlideRequest<Drawable> request = GlideApp
           .with(imageView.getContext())
           .load(Uri.parse(url));

       if (drawable != null) {
           request
               .placeholder(drawable)
               .error(drawable);
       }
       request
           .centerCrop()
           .transition(withCrossFade())
           .transform(transformation == null ? new CenterCrop() : transformation)
           .into(imageView);
   }

 

The method is straightforward. First, the URL is checked for emptiness; if it is empty, the placeholder drawable is set on the image view and the method returns. Using GlideApp is simple: pass the context via the with method and the URL via load, which returns a GlideRequest offering operators for the other required options like transitions, transformations and placeholders. Lastly, pass the image view using the into operator. By default, Glide uses the HttpURLConnection provided by Android to load images; this can be switched to OkHttp using the extension provided by the library, which is what the registerComponents method in the AppGlideModule implementation does.
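To tie this into data binding, the helper can be exposed as a binding adapter. The following is a sketch only; the attribute names and method signature are illustrative, not the app's actual declaration:

// Hypothetical binding adapter exposing setGlideImage to XML layouts
@BindingAdapter(value = {"imageUrl", "placeholder"}, requireAll = false)
public static void bindImage(ImageView imageView, String url, Drawable placeholder) {
    setGlideImage(imageView, url, placeholder, null);
}

A layout could then bind an image with something like app:imageUrl="@{event.logoUrl}".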

Links:
1. Documentation for Glide, an Image Loading Library
2. Documentation for Okhttp, an HTTP client for Android and Java Applications


Fascinating Experiments with PSLab

PSLab can be used extensively in a variety of experiments, ranging from traditional electrical and electronics experiments to a number of innovative ones. The PSLab desktop app and the Android app have all the essential features needed to perform the experiments. In addition, there is a large collection of built-in experiments in both these apps.

This blog is an extension of the blog post mentioned here. It lists some basic electrical and electronics experiments based on the same principles mentioned in the previous blog, along with some interesting and innovative experiments where PSLab can be used. The experiments mentioned here require some prerequisite knowledge of electronic elements and basic circuit building. (The links mentioned at the end of the blog will be helpful in this case.)

Op-Amp as an Inverting and a Non-Inverting Amplifier

There are two methods of doing this experiment. PSLab already has a built-in experiment dedicated to the inverting and non-inverting amplification of op-amps. In the Android app, just navigate to Saved Experiments -> Electronics Experiments -> Op-Amp Circuits -> Inverting/Non-Inverting. In the desktop app, select Electronics Experiments from the main drop-down at the top of the window and select the inverting/non-inverting op-amp experiment.

This experiment can also be performed using the basic features of PSLab. The advantage of this route is that it allows much more tweaking of values to observe the op-amp behaviour in greater detail; however, the built-in experiment is good enough for most cases.

  • Construct the above circuits on a breadboard.
  • For the amplifier, connect the terminals of CH1 and GND of PSLab on the input side i.e. next to Vi and the terminals of CH2 and GND on the output side i.e next to Vo.
  • Usually, an op-amp like the LM741 has dedicated pins, one for the inverting input and one for the non-inverting input. It is recommended to consult the datasheet of the op-amp IC used in order to get the pin number to which the input has to be connected.
  • The terminals of W1 and GND are also connected on the input side and they are used to generate a sine wave.
  • The resistors displayed in the figure have the values R1 = 10k and R2 = 51k. Resistance values other than these can also be considered. The gain of the op-amp would depend on the ratio of R2/R1, so it is better to consider values of R2 which are significantly larger than R1 in order to see the gain properly.
  • Use the PSLab Desktop App and open the Waveform Generator in Control. Set the wave type of W1 to Sine and set the frequency at 1 kHz and magnitude to 0.1 V. Then go ahead and open the Oscilloscope.
  • CH1 would display the input waveform and CH2 will display the output waveform and the plots can be observed.
  • If the input is connected to the inverting pin of the op-amp, the output will be amplified and will have a phase difference of 180° with respect to the input waveform, whereas when the non-inverting pin is used, the output is simply amplified with no such phase difference.
  • Note: Take proper care while connecting the V+ and V- pins of the op-amp, else the op-amp will be damaged.

Op-Amp as an Integrator and Differentiator

An integrator in measurement and control applications is an element whose output signal is the time integral of its input signal. It accumulates the input quantity over a defined time to produce a representative output.

Integration is an important part of many engineering and scientific applications. Mechanical integrators are the oldest application and are still used in areas such as metering of water flow or electric power. Electronic analogue integrators are the basis of analog computers and charge amplifiers. Integration is also performed by digital computing algorithms.

In electronics, a differentiator is a circuit that is designed such that the output of the circuit is approximately directly proportional to the rate of change (the time derivative) of the input. An active differentiator includes some form of amplifier. A passive differentiator circuit is made of only resistors and capacitors.

  • Construct the above circuits on a breadboard.
  • For both the circuits, connect the terminals of CH1 and GND of PSLab on the input side i.e. next to input voltage source and the terminals of CH2 and GND on the output side i.e next to Vo.
  • Ensure that the inverting and the non-inverting terminals of the op-amp are connected correctly. Check for the +/- signs in the diagram. ‘+’ corresponds to non-inverting and ‘-’ corresponds to inverting.
  • The terminals of W1 and GND are also connected on the input side and they are used to generate a sine wave.
  • Use the PSLab Desktop App and open the Waveform Generator in Control. Set the wave type of W1 to Sine and set the frequency at 1 kHz and magnitude to 5V (10V peak to peak). Then go ahead and open the Oscilloscope.
  • CH1 would display the input waveform and CH2 will display the output waveform and the plots can be observed.
  • If all the connections are made properly and the values of the parameters are set properly, then the waveform obtained should be as shown below.

Performing experiments involving ICs (Digital circuits)

The experiments mentioned so far including the ones mentioned in the previous blog post involved analog circuits and so they required features like the arbitrary waveform generator. However, digital circuits work using discrete values only. PSLab has the features needed to perform digital experiments which mainly involve the use of a square wave generator with a variable duty cycle.

The PSLab board has dedicated pins named SQR1, SQR2, SQR3 and SQR4. The options for configuring these pins are present under the Advanced Control section in the desktop app, and in the Android app under Applications -> Control -> Advanced. The options include selecting the pins to be used for digital output and configuring the frequency and duty cycle of the square wave generated from each pin.
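The same configuration can also be scripted with the PSLab Python library. The following is only a rough sketch; the module and method names are assumptions and should be verified against the installed library version:

from PSL import sciencelab  # assumed module name

I = sciencelab.connect()    # connect to the PSLab board
I.sqr1(1000, 50)            # assumed API: 1 kHz square wave, 50% duty cycle on SQR1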

Innovative Experiments using PSLab

PSLab has quite a good number of interesting built-in experiments. These experiments can be found in the drop-down list at the top in the desktop app and under the Saved Experiments header in the Android app. The built-in experiments come bundled with good-quality documentation containing circuit diagrams and a detailed procedure for performing each experiment.

Some of the interesting experiments include:

  • Lemon Cell Experiment: In this experiment, the internal resistance and the voltage supplied by the lemon cell are measured.

 

  • Sound Beats: In acoustics, a beat is an interference pattern between two sounds of slightly different frequencies, perceived as a periodic variation in volume whose rate is the difference of the two frequencies.

 

  • When tuning instruments that can produce sustained tones, beats can readily be recognized. Tuning two tones to a unison will present a peculiar effect: when the two tones are close in pitch but not identical, the difference in frequency generates the beating.
  • This experiment requires producing two waves of different frequencies together and connecting them to the same oscilloscope channel. The pattern observed is shown below.

References:

  1. The previous blog on experiments using PSLab – https://blog.fossasia.org/electronics-experiments-with-pslab/
  2. More about op-amps and their characteristics – http://www.electronics-tutorials.ws/opamp/opamp_1.html
  3. Read more about differential and integrator circuits – https://www.allaboutcircuits.com/textbook/semiconductors/chpt-8/differentiator-integrator-circuits/
  4. Experiments involving digital circuits for reference – http://web.iitd.ac.in/~shouri/eep201/experiments.php

Adding Static Code Analyzers in Open Event Orga Android App

This week, in the Open Event Orga App project (GitHub repo), we wanted to add some static code analysers that run on each build to ensure that the app code is free of potential bugs and follows a certain style. Codacy handles a few of these things, but it is quirky and sometimes produces false positives; furthermore, it is not a required check for builds, so errors can creep in gradually. We chose Checkstyle, PMD and FindBugs for static analysis, as they are the most popular such tools for Java. The areas they cover overlap somewhat, but together they give better assurance of code quality. FindBugs actually analyses the bytecode instead of the source code to find possible JVM bugs.

Adding dependencies

The first step was to add the required dependencies. We chose the library android-check, as it bundles all three tools, is focused on Android, and is easily configurable. First, we add the classpath in the project-level build.gradle:

dependencies {
   classpath 'com.noveogroup.android:check:1.2.4'
}

 

Then, we apply the plugin in the app-level build.gradle:

apply plugin: 'com.noveogroup.android.check'

 

This much is enough to get you started, but by default the build will not fail if any violations are found. To change this behaviour, we add this block in the app-level build.gradle:

check {
   abortOnError true
}

 

There are many configuration options available for the library; do check out the project GitHub repo using the link provided above.

Configuration

The default configuration is the easy level and will be enough for most projects, but it is of course configurable. We took the default hard configs for the three analysers and disabled the properties we did not need. The config files must be placed in the config folder in either the root project directory or the app directory, and must be named checkstyle.xml, pmd.xml and findbugs.xml.

These are the default settings; you can configure them further by following the instructions on the project repo.

Checkstyle

For checkstyle, you can find the easy and hard configuration here

The basic principle is that if you need to add a check, you include a module like this:

<module name="NewlineAtEndOfFile" />

 

If you want to modify the default value of some property, you do it like this:

<module name="RegexpSingleline">
   <property name="format" value="\s+$" />
   <property name="minimum" value="0" />
   <property name="maximum" value="0" />
   <property name="message" value="Line has trailing spaces." />
   <property name="severity" value="info" />
</module>

 

And if you want to remove a check, you can ignore it like this:

<module name="EqualsHashCode">
   <property name="severity" value="ignore" />
</module>

 

It’s pretty straightforward and easy to configure.

Findbugs

For findbugs, you can find the easy and hard configuration here

Findbugs configuration exists in the form of filters where we list resources it should skip analyzing, like:

<Match>
   <Class name="~.*\.BuildConfig" />
</Match>

 

If we want to ignore a particular pattern, we can do so like this:

<!-- No need to force hashCode for simple models -->
<Match>
   <Bug pattern="HE_EQUALS_USE_HASHCODE" />
</Match>

 

Sometimes you'd want to ignore a pattern only for certain files or fields. FindBugs supports regex to match such items:

<!-- Don't complain about rules in tests. -->
<Match>
   <Field name="~.*mockitoRule"/>
   <Bug pattern="URF_UNREAD_PUBLIC_OR_PROTECTED_FIELD" />
</Match>

 

You can also annotate your code to suppress a warning on a particular class, method or field rather than disabling it for the whole project. For that, you need to add the FindBugs annotations dependency to the project:

compile 'com.google.code.findbugs:findbugs-annotations:3.0.1'

 

And then use it like this:

@SuppressFBWarnings(
   value = "ICAST_IDIV_CAST_TO_DOUBLE",
   justification = "We want granularity to be integer")
public void showChart(LineChart lineChart) {
   ...
}

 

It also allows stating the justification for suppressing the rule, for clarity.

PMD

For PMD, you can find the easy and hard configuration here.

Like checkstyle, you have to first add a rule set to tell PMD which checks to perform:

<rule ref="rulesets/java/android.xml" />

 

If you want to modify the default value of the rule, you can do it like this:

<rule ref="rulesets/java/codesize.xml/TooManyMethods">
   <properties>
       <property name="maxmethods" value="15" />
   </properties>
</rule>

 

Or if you want to entirely exclude a rule, you can do it like this:

<rule ref="rulesets/java/basic.xml">
   <exclude name="OverrideBothEqualsAndHashcode" />
</rule>

 

PMD also supports suppressing warnings in the code itself using annotations. You don't require any external libraries for it, as it supports the built-in java.lang.SuppressWarnings annotation. You can use it like this:

@SuppressWarnings("PMD.AvoidInstantiatingObjectsInLoops") // Entries cannot be created outside loop
private LineDataSet setData(Map<String, Long> map, String label) throws ParseException {
   ...
}

 

As you can see, we need to prepend “PMD.” to the rule name so that there are no clashes during annotation processing. Remember to comment the reason for suppressing the warning so that your co-developers know why, and can remove it in the future if the criteria no longer apply.

There is a lot more to learn about these static analyzers, which you can read about in their official documentation:


Avoiding Nested Callbacks using RxJS in Loklak Scraper JS

Loklak Scraper JS, as the name suggests, is a set of scrapers for social media websites written in NodeJS. One of the most common situations while scraping is that a parent webpage provides links to related child webpages, and the data that needs to be scraped is present on both the parent and the child webpages. For example, let's say we want to scrape Quora user profiles matching the search query “Siddhant”. The matching-profiles webpage for this example is https://www.quora.com/search?q=Siddhant&type=profile, which is the parent webpage, and the child webpages are the links of each matched profile.

Now, a simplistic approach is to first obtain the HTML of the parent webpage and then synchronously fetch and parse the HTML of the child webpages to get the desired data. The problem with this approach is that it is slow, as it is synchronous.

A different approach is to use request-promise-native to implement the logic asynchronously. But there are limitations with this approach: the HTML of the child webpages can only be fetched after the HTML of the parent webpage is obtained, and the number of child webpages is dynamic. So there is a request dependency between parent and child, i.e. only once we have data from the parent webpage can we extract data from the child webpages. The code would look like this:

request(parent_url)
   .then(data => {
       ...
       request(child_url)
           .then(data => {
               // again nesting of child urls
           })
           .catch(error => {

           });
   })
   .catch(error => {

   });

 

Firstly, with this approach there is callback hell. Horrible, isn't it? And we don't even know how many nested callbacks to use, as the number of child webpages is dynamic.

The saviour: RxJS

The solution to our problem is reactive extensions for JavaScript. Using RxJS we can obtain the required data asynchronously and without callback hell!

The promise-request object of the parent webpage is obtained. From this promise-request object an observable is generated using Rx.Observable.fromPromise. The flatMap operator is used to parse the HTML of the parent webpage and obtain the links of the child webpages. Then the map method is used to transform the links into promise-request objects, which are again transformed into observables. The returned value – HTML – from the resulting observables is parsed and accumulated using the zip operator. Finally, the accumulated data is subscribed to. This is implemented in the getScrapedData method of the Quora JS scraper.
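For context, the method relies on the following modules; this is a sketch of the imports, and the exact package names may differ in the repo depending on the RxJS version used:

const Rx = require('rx');           // reactive extensions ('rxjs/Rx' for RxJS 5)
const cheerio = require('cheerio'); // server-side HTML parsing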

getScrapedData(query, callback) {
   // observable from parent webpage
   Rx.Observable.fromPromise(this.getSearchQueryPromise(query))
     .flatMap((t, i) => { // t is html of parent webpage
       // request-promise object of child webpages
       let profileLinkPromises = this.getProfileLinkPromises(t);
       // request-promise object to observable transformation
       let obs = profileLinkPromises.map(elem => Rx.Observable.fromPromise(elem));

       // each Quora profile is parsed
       return Rx.Observable.zip( // accumulation of data from child webpages
         ...obs,
         (...profileLinkObservables) => {
           let scrapedProfiles = [];
           for (let i = 0; i < profileLinkObservables.length; i++) {
             let $ = cheerio.load(profileLinkObservables[i]);
             scrapedProfiles.push(this.scrape($));
           }
           return scrapedProfiles; // accumulated data returned
         }
       )
     })
     .subscribe( // desired data is subscribed
       scrapedData => callback({profiles: scrapedData}),
       error => callback(error)
     );
 }

 



Posting Tweet from Loklak Wok Android

Loklak Wok Android is a peer harvester that posts collected tweets to the Loklak server. Besides harvesting, it also lets users post their own tweets from the app; images and the user's location can be attached to a tweet. This blog explains how tweet posting is implemented in the app.

Adding Dependencies to the project

In app/build.gradle:

apply plugin: 'com.android.application'
apply plugin: 'me.tatarka.retrolambda'

android {
   ...
   packagingOptions {
       exclude 'META-INF/rxjava.properties'
   }
}

dependencies {
   ...
   compile 'com.google.code.gson:gson:2.8.1'

   compile 'com.squareup.retrofit2:retrofit:2.3.0'
   compile 'com.squareup.retrofit2:converter-gson:2.3.0'
   compile 'com.squareup.retrofit2:adapter-rxjava2:2.3.0'

   compile 'io.reactivex.rxjava2:rxjava:2.0.5'
   compile 'io.reactivex.rxjava2:rxandroid:2.0.1'
}

 

In build.gradle project level:

dependencies {
   classpath 'com.android.tools.build:gradle:2.3.3'
   classpath 'me.tatarka:gradle-retrolambda:3.2.0'
}

 

Implementation

Users first authorize the application so that they are able to post tweets from the app. For posting a tweet, Twitter's statuses/update API endpoint is used, and for attaching images to a tweet, the media/upload endpoint is used.
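For orientation, here is a sketch of the Retrofit interfaces that the fragment code below relies on. The method names match the calls used later (postTweet, getMediaId), but the annotations, parameter names and model types (StatusUpdate, MediaUpload) are assumptions based on the Twitter REST API and Retrofit 2 conventions; the actual declarations in the repo may differ:

import io.reactivex.Observable;
import okhttp3.RequestBody;
import retrofit2.http.Field;
import retrofit2.http.FormUrlEncoded;
import retrofit2.http.Multipart;
import retrofit2.http.POST;
import retrofit2.http.Part;

interface TwitterApi {
    @FormUrlEncoded
    @POST("1.1/statuses/update.json")
    Observable<StatusUpdate> postTweet(
            @Field("status") String status,
            @Field("media_ids") String mediaIds,
            @Field("lat") Double latitude,
            @Field("long") Double longitude);
}

interface TwitterMediaApi {
    @Multipart
    @POST("1.1/media/upload.json")
    Observable<MediaUpload> getMediaId(
            @Part("media") RequestBody media,
            @Part("media_data") String mediaData);
}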

As photos and location can be attached to a tweet, on Android Marshmallow and above we need to request runtime permissions for the camera, gallery and location. The related permissions are declared in the manifest file first:

<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
// for location
<uses-feature android:name="android.hardware.location.gps"/>
<uses-feature android:name="android.hardware.location.network"/>

 

If the device runs an OS below Android Marshmallow, there are no runtime permissions; the user grants the permissions at the time of installing the app.

Now the runtime permissions are requested; if the user has already granted a permission, the related activity (camera, gallery or location) is started.

For camera permissions, onClickCameraButton is called:

@OnClick(R.id.camera)
public void onClickCameraButton() {
   int permission = ContextCompat.checkSelfPermission(
           getActivity(), Manifest.permission.CAMERA);
   if (isAndroidMarshmallowAndAbove && permission != PackageManager.PERMISSION_GRANTED) {
       String[] permissions = {
               Manifest.permission.CAMERA,
               Manifest.permission.WRITE_EXTERNAL_STORAGE,
               Manifest.permission.READ_EXTERNAL_STORAGE
       };
       requestPermissions(permissions, CAMERA_PERMISSION);
   } else {
       startCameraActivity();
   }
}

 

To start the camera activity when the permission is already granted, the startCameraActivity method is called:

private void startCameraActivity() {
   Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
   File dir = getActivity().getExternalFilesDir(Environment.DIRECTORY_PICTURES);
   mCapturedPhotoFile = new File(dir, createFileName());
   Uri capturedPhotoUri = getImageFileUri(mCapturedPhotoFile);
   intent.putExtra(MediaStore.EXTRA_OUTPUT, capturedPhotoUri);
   startActivityForResult(intent, REQUEST_CAPTURE_PHOTO);
}

 

If the user decides to keep the photo taken in the camera activity, it is saved by creating a file, and the file's URI is required to display the saved photo. The filename is created using the createFileName method:

private String createFileName() {
   String timeStamp = new SimpleDateFormat("ddMMyyyy_HHmmss").format(new Date());
   return "JPEG_" + timeStamp + ".jpg";
}

 

and the URI is obtained using getImageFileUri:

private Uri getImageFileUri(File file) {
   if (Build.VERSION.SDK_INT < Build.VERSION_CODES.N) {
       return Uri.fromFile(file);
   } else {
       return FileProvider.getUriForFile(getActivity(), "org.loklak.android.provider", file);
   }
}

 

Similarly, for the gallery, the onClickGalleryButton method asks for runtime permissions and launches the gallery activity if the permission is already granted:

@OnClick(R.id.gallery)
public void onClickGalleryButton() {
   int permission = ContextCompat.checkSelfPermission(
           getActivity(), Manifest.permission.WRITE_EXTERNAL_STORAGE);
   if (isAndroidMarshmallowAndAbove && permission != PackageManager.PERMISSION_GRANTED) {
       String[] permissions = {
               Manifest.permission.WRITE_EXTERNAL_STORAGE,
               Manifest.permission.READ_EXTERNAL_STORAGE
       };
       requestPermissions(permissions, GALLERY_PERMISSION);
   } else {
       startGalleryActivity();
   }
}

 

For starting the gallery activity, startGalleryActivity is used:

private void startGalleryActivity() {
   Intent intent = new Intent();
   intent.setType("image/*");
   intent.setAction(Intent.ACTION_GET_CONTENT);
   intent.putExtra(Intent.EXTRA_ALLOW_MULTIPLE, true);
   startActivityForResult(
           Intent.createChooser(intent, "Select images"), REQUEST_GALLERY_MEDIA_SELECTION);
}

 

And finally, for location, onClickAddLocationButton is implemented:

@OnClick(R.id.location)
public void onClickAddLocationButton() {
   int permission = ContextCompat.checkSelfPermission(
           getActivity(), Manifest.permission.ACCESS_FINE_LOCATION);
   if (isAndroidMarshmallowAndAbove && permission != PackageManager.PERMISSION_GRANTED) {
       String[] permissions = {Manifest.permission.ACCESS_FINE_LOCATION};
       requestPermissions(permissions, LOCATION_PERMISSION);
   } else {
       getLatitudeLongitude();
   }
}

 

If the permission is already granted, getLatitudeLongitude is called. The last known location is requested from the LocationManager; if there is no last known location, the current location is requested using a LocationListener:

private void getLatitudeLongitude() {
   mLocationManager =
           (LocationManager) getActivity().getSystemService(Context.LOCATION_SERVICE);

   // last known location from network provider
   Location location = mLocationManager.getLastKnownLocation(LocationManager.NETWORK_PROVIDER);
   if (location == null) { // last known location from gps
       location = mLocationManager.getLastKnownLocation(LocationManager.GPS_PROVIDER);
   }

   if (location != null) { // last known loaction available
       mLatitude = location.getLatitude();
       mLongitude = location.getLongitude();
       setLocation();
   } else { // last known location not available
       mLocationListener = new TweetLocationListener();
       // current location requested
       mLocationManager.requestLocationUpdates("gps", 1000, 1000, mLocationListener);
   }
}

 

TweetLocationListener is a LocationListener that provides the current location. If GPS is disabled, the settings screen is launched so that the user can enable GPS; this is implemented in the onProviderDisabled callback of the listener:

private class TweetLocationListener implements LocationListener {

   @Override
   public void onLocationChanged(Location location) {
       mLatitude = location.getLatitude();
       mLongitude = location.getLongitude();
       setLocation();
   }

   @Override
   public void onStatusChanged(String s, int i, Bundle bundle) {

   }

   @Override
   public void onProviderEnabled(String s) {

   }

   @Override
   public void onProviderDisabled(String s) {
       Intent intent = new Intent(Settings.ACTION_LOCATION_SOURCE_SETTINGS);
       startActivity(intent);
   }
}

 

When the user responds to a permission request, the onRequestPermissionsResult callback is invoked; if the permission is granted, the respective activity is opened or the latitude and longitude are obtained:

@Override
public void onRequestPermissionsResult(
       int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
   boolean isResultGranted = grantResults[0] == PackageManager.PERMISSION_GRANTED;
   switch (requestCode) {
       case CAMERA_PERMISSION:
           if (grantResults.length > 0 && isResultGranted) {
               startCameraActivity();
           }
           break;
       case GALLERY_PERMISSION:
           if (grantResults.length > 0 && isResultGranted) {
               startGalleryActivity();
           }
           break;
       case LOCATION_PERMISSION:
           if (grantResults.length > 0 && isResultGranted) {
               getLatitudeLongitude();
           }
   }
}

 

Since the camera and gallery activities are started to obtain a result, i.e. photo(s), the onActivityResult callback is invoked:

@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
   switch (requestCode) {
       case REQUEST_CAPTURE_PHOTO:
           if (resultCode == Activity.RESULT_OK) {
               onSuccessfulCameraActivityResult();
           }
           break;
       case REQUEST_GALLERY_MEDIA_SELECTION:
           if (resultCode == Activity.RESULT_OK) {
               onSuccessfulGalleryActivityResult(data);
           }
           break;
       default:
           super.onActivityResult(requestCode, resultCode, data);
   }
}

 

If the camera activity result is a success, i.e. the image was saved by the user, the saved image is displayed in a RecyclerView in TweetPostingFragment. This is implemented in the onSuccessfulCameraActivityResult method:

private void onSuccessfulCameraActivityResult() {
   tweetMultimediaContainer.setVisibility(View.VISIBLE);
   Bitmap bitmap = BitmapFactory.decodeFile(mCapturedPhotoFile.getAbsolutePath());
   mTweetMediaAdapter.clearAdapter();
   mTweetMediaAdapter.addBitmap(bitmap);
}

 

For the gallery activity, if a single image is selected, the image URI can be obtained using the getData method of the Intent; if multiple images are selected, the URIs are stored in its ClipData. After the URIs are obtained, we check whether more than 4 images were selected, as Twitter allows at most 4 images in a tweet, and the URIs of any extra images are removed. From each URI the file path is resolved, a Bitmap is decoded from the file, and the Bitmaps are displayed in the RecyclerView. This is implemented in onSuccessfulGalleryActivityResult:

private void onSuccessfulGalleryActivityResult(Intent intent) {
   tweetMultimediaContainer.setVisibility(View.VISIBLE);
   Context context = getActivity();

   // get uris of selected images
   ClipData clipData = intent.getClipData();
   List<Uri> uris = new ArrayList<>();
   if (clipData != null) {
       for (int i = 0; i < clipData.getItemCount(); i++) {
           ClipData.Item item = clipData.getItemAt(i);
           uris.add(item.getUri());
       }
   } else {
       uris.add(intent.getData());
   }

   // remove of more than 4 images
   int numberOfSelectedImages = uris.size();
   if (numberOfSelectedImages > 4) {
       while (numberOfSelectedImages-- > 4) {
           uris.remove(numberOfSelectedImages);
       }
       Utility.displayToast(mToast, context, moreImagesMessage);
   }

   // get bitmap from uris of images
   List<Bitmap> bitmaps = new ArrayList<>();
   for (Uri uri : uris) {
       String filePath = FileUtils.getPath(context, uri);
       Bitmap bitmap = BitmapFactory.decodeFile(filePath);
       bitmaps.add(bitmap);
   }

   // display images in RecyclerView
   mTweetMediaAdapter.setBitmapList(bitmaps);
}

 

Now, to post images with a tweet, the ID of each image first needs to be obtained via a multipart POST request to the media/upload API endpoint; the obtained ID(s) are then passed as the value of “media_ids” to the statuses/update endpoint. Since there can be more than one image, a separate observable is created for each image. The bitmap is converted to raw bytes for the multipart request. As the process includes a network request and a bitmap-to-bytes conversion – resource-heavy work which shouldn't run on the main thread – an observable is created for it, so the tasks are performed concurrently, i.e. on a separate thread:

private Observable<String> getImageId(Bitmap bitmap) {
   return Observable
           .defer(() -> {
               // convert bitmap to bytes
               ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
               bitmap.compress(Bitmap.CompressFormat.JPEG, 100, byteArrayOutputStream);
               byte[] bytes = byteArrayOutputStream.toByteArray();
               RequestBody mediaBinary = RequestBody.create(MultipartBody.FORM, bytes);
               return Observable.just(mediaBinary);
           })
           .flatMap(mediaBinary -> mTwitterMediaApi.getMediaId(mediaBinary, null))
           .flatMap(mediaUpload -> Observable.just(mediaUpload.getMediaIdString()))
           .subscribeOn(Schedulers.newThread());
}

 

The tweet is posted when the “Tweet” button is clicked, which invokes the onClickTweetPostButton method:

@OnClick(R.id.tweet_post_button)
public void onClickTweetPostButton() {
   String status = tweetPostEditText.getText().toString();

   List<Bitmap> bitmaps = mTweetMediaAdapter.getBitmapList();
   List<Observable<String>> mediaIdObservables = new ArrayList<>();
   for (Bitmap bitmap : bitmaps) { // observables for images is created
       mediaIdObservables.add(getImageId(bitmap));
   }

   if (mediaIdObservables.size() > 0) {
       // Post tweet with image
       postImageAndTextTweet(mediaIdObservables, status);
   } else if (status.length() > 0) {
       // Post text only tweet
       postTextOnlyTweet(status);
   } else {
       Utility.displayToast(mToast, getActivity(), tweetEmptyMessage);
   }
}

 

Tweets containing images are posted by calling postImageAndTextTweet; once the tweet data is obtained, it is cross-posted to the Loklak server. The image IDs are obtained concurrently using the zip operator:

private void postImageAndTextTweet(List<Observable<String>> imageIdObservables, String status) {
   mProgressDialog.show();
   ConnectableObservable<StatusUpdate> observable = Observable.zip(
           imageIdObservables,
           mediaIdArray -> {
               String mediaIds = "";
               for (Object mediaId : mediaIdArray) {
                   mediaIds = mediaIds + String.valueOf(mediaId) + ",";
               }
               return mediaIds.substring(0, mediaIds.length() - 1);
           })
           .flatMap(imageIds -> mTwitterApi.postTweet(status, imageIds, mLatitude, mLongitude))
           .subscribeOn(Schedulers.io())
           .publish();

   Disposable postingDisposable = observable
           .subscribeOn(Schedulers.io())
           .observeOn(AndroidSchedulers.mainThread())
           .subscribe(this::onSuccessfulTweetPosting, this::onErrorTweetPosting);
   mCompositeDisposable.add(postingDisposable);

   // cross posting to loklak server   
   Disposable crossPostingDisposable = observable
           .flatMap(this::pushTweetToLoklak)
           .subscribeOn(Schedulers.io())
           .observeOn(AndroidSchedulers.mainThread())
           .subscribe(
                   push -> {},
                   t -> Log.e(LOG_TAG, "Cross posting failed: " + t.toString())
           );
   mCompositeDisposable.add(crossPostingDisposable);

   Disposable publishDisposable = observable.connect();
   mCompositeDisposable.add(publishDisposable);
}

 

In the case of text-only tweets, the text is obtained from the EditText and the media IDs are passed as null. Once the tweet data is obtained, it is cross-posted to the Loklak server. This is executed by calling postTextOnlyTweet:

private void postTextOnlyTweet(String status) {
   mProgressDialog.show();
   ConnectableObservable<StatusUpdate> observable =
           mTwitterApi.postTweet(status, null, mLatitude, mLongitude)
           .subscribeOn(Schedulers.io())
           .publish();

   Disposable postingDisposable = observable
           .subscribeOn(Schedulers.io())
           .observeOn(AndroidSchedulers.mainThread())
           .subscribe(this::onSuccessfulTweetPosting, this::onErrorTweetPosting);
   mCompositeDisposable.add(postingDisposable);


   // cross posting to loklak server
   Disposable crossPostingDisposable = observable
           .flatMap(this::pushTweetToLoklak)
           .subscribeOn(Schedulers.io())
           .observeOn(AndroidSchedulers.mainThread())
           .subscribe(
                   push -> Log.e(LOG_TAG, push.getStatus()),
                   t -> Log.e(LOG_TAG, "Cross posting failed: " + t.toString())
           );
   mCompositeDisposable.add(crossPostingDisposable);

   Disposable publishDisposable = observable.connect();
   mCompositeDisposable.add(publishDisposable);
}

 

