Getting skills by an author in SUSI.AI Skill CMS

The skill description page of any skill in SUSI.AI Skill CMS displays all the details of that skill: its image, description, examples and the name of its author. A skill can impress users, and they may want to know about more skills made by that particular author.

We therefore decided to display all the skills by an author. For this we needed a server endpoint that returns the skills of a given author; this cannot be done on the client side, as that would require a separate AJAX call for every skill. The endpoint used is:

"http://api.susi.ai/cms/getSkillsByAuthor.json?author=" + author

Here author is the name of the author who published the particular skill. We make an AJAX call to the server with the endpoint mentioned above when the user clicks on the author's name. A sample response looks like this:

{
  "0": "/home/susi/susi_skill_data/models/general/Entertainment/en/creator_info.txt",
  "1": "/home/susi/susi_skill_data/models/general/Entertainment/en/flip_coin.txt",
  "2": "/home/susi/susi_skill_data/models/general/Assistants/en/websearch.txt",
  "session": {
    "identity": {
      "type": "host",
      "name": "139.5.254.154",
      "anonymous": true
    }
  }
}

The response contains the list of skills made by the author. We parse this response to get the required information and display a table containing the name, category and language of each skill. We use the map function on the object keys to parse the information for every key present in the JSON response. Every value corresponding to a key is a skill path of the following form:

"/home/susi/susi_skill_data/models/general/Category/language/name.txt"

Explanation:

  • Category: There are currently six categories: Assistants, Entertainment, Knowledge, Problem Solving, Shopping and Small Talks. Each skill falls under one of these categories.
  • language: This represents the ISO code of the language in which the skill is written.
  • name: This is the name of the skill.

To extract these attributes from the string, we use the split function:

let parse = data[skill].split('/');

Here data is the JSON response and skill is the key whose value is being parsed. The array returned by the split function is stored in the variable parse. The map callback then returns the following table row:

return (
            <TableRow>
               <TableRowColumn>
                   <div>
                      <Img
                         style={imageStyle}
                         src={[
                              image1,
                              image2
                         ]}
                         unloader={<CircleImage name={name} size="40"/>}
                       />
                       {name}
                    </div>
                </TableRowColumn>
                <TableRowColumn>{parse[6]}</TableRowColumn>
                <TableRowColumn>{isoConv(parse[7])}</TableRowColumn>
             </TableRow>
          )

Here:

    • name: The name of the skill, converted into title case by the following code:
let name = parse[8].split('.')[0];
name = name.charAt(0).toUpperCase() + name.slice(1);
  • parse[6]: This represents the category of the skill.
  • isoConv(parse[7]): parse[7] is the ISO code of the skill's language, and isoConv is an npm package used to get the full language name from an ISO code.
  • CircleImage: This is a fallback option in case the image at the URL is not found. It takes the first two words of the name and renders them as a circular component.
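Putting the pieces together, the overall flow looks roughly like this. It is a condensed sketch – the $.ajax wrapper, the filter on the session key and the variable names are illustrative rather than the exact CMS code:

let url = 'http://api.susi.ai/cms/getSkillsByAuthor.json?author=' + author;
$.ajax({
    url: url,
    success: function (data) {
        let skillRows = Object.keys(data)
            .filter(skill => skill !== 'session') // skip the session metadata
            .map(skill => {
                let parse = data[skill].split('/'); // split the skill path into its parts
                let name = parse[8].split('.')[0];
                name = name.charAt(0).toUpperCase() + name.slice(1);
                return { name: name, category: parse[6], language: isoConv(parse[7]) };
            });
        // skillRows is then rendered as the <TableRow> elements shown above
    }
});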

After successful execution of the code, the skills by the author are displayed in a table as described above.


Implementation of React Routers in SUSI Web Chat

When we were developing the SUSI Web Chat application, we wanted to implement a set of static pages alongside the chat application. At the start we just wanted to navigate through the different static pages and move back to the web chat application, but it takes time to load a new page when the user clicks a link. Our goal was therefore to minimize the loading time by using lazy loading. For that we used react-router, the standard routing library for React.

From the docs:

“React Router keeps your UI synced with the URL. It has a simple API with powerful features like lazy code loading, dynamic route matching, and location transition handling built right in. Make the URL your first thought, not an after-thought.” (https://www.npmjs.com/package/react-router)

We first need react-router to be installed in our application. We can install it using npm by running this command in the project folder:

npm install --save react-router-dom

Next we have to set up our routes. We have two types of headers in our application: the chat application header and the static page header. The static page header has a navigation bar to switch between static pages.
First we need to choose the router type, because React Router offers two main routers: “BrowserRouter” and “HashRouter”. We can use “BrowserRouter” in our example because our server can handle dynamic requests; if we were serving only static pages we should use “HashRouter” instead.
We used “BrowserRouter” in “index.js”, wrapping a new component called “App” inside it like this:

import { BrowserRouter as Router } from 'react-router-dom';
import App from './App';

ReactDOM.render(
  <IntlProvider locale={defaultPrefLanguage}>
    <Router> <App /> </Router>
  </IntlProvider>,
  document.getElementById('root')
);

In “App.js” we can set up the routes like this:

        <Switch>
            <Route exact path='/' component={ChatApp}/>
            <Route exact path="/overview" component={Overview} />
            <Route exact path='/blog' component={Blog} />
            <Route exact path="/logout" component={Logout} />
            <Route exact path="/settings" component={Settings} />
            <Route exact path="*" component={NotFound} />
        </Switch>

We use “Route” elements to render a component when the URL matches the corresponding path. We use “path” to add the router location that we need to match with the component, and the “exact” property to render the component only if the location exactly matches the “path”. If we do not use the “exact” property, the component also renders for child routes after the path, like “/blog/1”.
We used the “Switch” element to group routes.
We can’t use anchor (<a>) tags to navigate between pages without reloading; we have to use “Link” tags instead. We have to replace all the places where we have used

<a href='URL'>label name</a>

with this,

<Link to='URL'>label name</Link>

After making the above changes, the application performs faster and loads the page content as soon as you click the navigation links.

If you would like to join FOSSASIA and contribute to the SUSI Web Chat application, please fork this repository on GitHub.


Adding Static Code Analyzers in Open Event Orga Android App

This week, in the Open Event Orga App project (Github Repo), we wanted to add some static code analysers that run on each build to ensure that the app code is free of potential bugs and follows a certain style. Codacy handles a few of these things, but it is quirky and sometimes produces false positives. Furthermore, it is not a required check for builds, so errors can creep in gradually. We chose Checkstyle, PMD and FindBugs for static analysis as they are the most popular tools for Java. Their areas of analysis overlap somewhat, but together they give good assurance about code quality. FindBugs actually analyses the bytecode instead of the source code to find possible JVM bugs.

Adding dependencies

The first step was to add the required dependencies. We chose the library android-check as it bundles all three tools, is focused on Android and is easily configurable. First, we add the classpath in the project-level build.gradle:

dependencies {
   classpath 'com.noveogroup.android:check:1.2.4'
}

 

Then, we apply the plugin in app level build.gradle

apply plugin: 'com.noveogroup.android.check'

 

This much is enough to get you started, but by default, the build will not fail if any violations are found. To change this behaviour, we add this block in app level build.gradle

check {
   abortOnError true
}

 

There are many configuration options available for the library. Do check out the project github repo using the link provided above

Configuration

The default configuration is the easy level, which will be enough for most projects, but it is of course configurable. So we took the default hard configs for the 3 analysers and disabled the properties we did not need. The config files must be stored in a config folder in either the root project directory or the app directory, and should be named checkstyle.xml, pmd.xml and findbugs.xml.

These are the default settings and you can obviously configure them by following the instructions on the project repo

Checkstyle

For checkstyle, you can find the easy and hard configuration here

The basic principle is that if you need to add a check, you include a module like this:

<module name="NewlineAtEndOfFile" />

 

If you want to modify the default value of some property, you do it like this:

<module name="RegexpSingleline">
   <property name="format" value="\s+$" />
   <property name="minimum" value="0" />
   <property name="maximum" value="0" />
   <property name="message" value="Line has trailing spaces." />
   <property name="severity" value="info" />
</module>

 

And if you want to remove a check, you can ignore it like this:

<module name="EqualsHashCode">
   <property name="severity" value="ignore" />
</module>

 

It’s pretty straightforward and easy to configure.

Findbugs

For findbugs, you can find the easy and hard configuration here

Findbugs configuration exists in the form of filters where we list resources it should skip analyzing, like:

<Match>
   <Class name="~.*\.BuildConfig" />
</Match>

 

If we want to ignore a particular pattern, we can do so like this:

<!-- No need to force hashCode for simple models -->
<Match>
   <Bug pattern="HE_EQUALS_USE_HASHCODE " />
</Match>

 

Sometimes, you’d want to ignore a pattern only for certain files or fields. FindBugs supports regex to match such items:

<!-- Don't complain about rules in tests. -->
<Match>
   <Field name="~.*mockitoRule"/>
   <Bug pattern="URF_UNREAD_PUBLIC_OR_PROTECTED_FIELD" />
</Match>

 

You can also annotate your code to suppress a warning in a particular class, method or field rather than disabling it for the whole project. For that, you need to add the FindBugs annotations dependency to the project:

compile 'com.google.code.findbugs:findbugs-annotations:3.0.1'

 

And then use it like this:

@SuppressFBWarnings(
   value = "ICAST_IDIV_CAST_TO_DOUBLE",
   justification = "We want granularity to be integer")
public void showChart(LineChart lineChart) {
   ...
}

 

It also allows setting a justification for suppressing the rule, for clarity.

PMD

For PMD, you can find the easy and hard configuration here

Like checkstyle, you have to first add a rule set to tell PMD which checks to perform:

<rule ref="rulesets/java/android.xml" />

 

If you want to modify the default value of the rule, you can do it like this:

<rule ref="rulesets/java/codesize.xml/TooManyMethods">
   <properties>
       <property name="maxmethods" value="15" />
   </properties>
</rule>

 

Or if you want to entirely exclude a rule, you can do it like this:

<rule ref="rulesets/java/basic.xml">
   <exclude name="OverrideBothEqualsAndHashcode" />
</rule>

 

PMD also supports suppressing warnings in the code itself using annotations. You don’t need any external libraries for it, as it supports the built-in java.lang.SuppressWarnings annotation. You can use it like this:

@SuppressWarnings("PMD.AvoidInstantiatingObjectsInLoops") // Entries cannot be created outside loop
private LineDataSet setData(Map<String, Long> map, String label) throws ParseException {
   ...
}

 

As you can see, we need to prepend “PMD.” to the rule name so that there are no clashes during annotation processing. Remember to comment the reason for suppressing the warning so that your co-developers know why, and can remove it in the future if the criteria no longer apply.

There is a lot more to learn about these static analyzers, which you can read about in their official documentation.


How to Get Secure Webhook for SUSI Bots in Kubernetes Deployment

A webhook is a user-defined callback which gets triggered by an event in code; receiving a message from a user is such an event in a SUSI bot. A few bots need a webhook URI for callbacks – for example, in the SUSI Viber bot we need to define a webhook URI in the code to receive callbacks and make the Viber bot work. In this blog post, we will learn how to get an SSL-enabled webhook while deploying a bot to Google Container Engine using Kubernetes. We will generate the SSL certificate using kube-lego, which runs in the cluster and is defined in the yaml files below. We could also generate an SSL certificate using a third-party service like CloudFlare, but that would make us dependent on CloudFlare, so we will use kube-lego.

We will start off by registering a domain, on which we will activate the SSL certificate and which we will use as the webhook. Go to Freenom and register an account. After logging in, register a free domain of any name and check out the order. Next, you have to set an IP for the DNS of this domain. To do so, we will reserve an IP address in our Google Cloud project with this command:

gcloud compute addresses create IPname --region us-central1

You will get a “created” message. To see your IP, go to VPC Network -> External IP addresses in the Google Cloud Console. Add this IP to the DNS zone of your domain and save it for later use in the yaml files that we will use for deployment. Now we will deploy the bot using yaml files, but before deployment we will create a cluster:

gcloud container clusters create clusterName

After creating the cluster, add these yaml files to your bot repository and add the IP address that you saved above to the yamls/nginx/service.yaml file as the “loadBalancerIP” parameter. Replace the domain name in yamls/application/ingress-notls.yaml and yamls/application/ingress-tls.yaml with the domain name that you registered earlier. Add your email ID to yamls/lego/configmap.yaml as the “lego.email” parameter. Replace the “image” and “env” parameters in yamls/application/deployment.yaml with your Docker image and the environment variables used in your code. After changing the yaml files, we will use this deploy script to create the deployment. Change the paths of the yaml files in the script according to where your yaml files are located.

In gcloud shell run the following command to deploy an application using given configurations.

bash ./path-to-deploy-script/deploy.sh create all

This will create the deployment as we have defined in the script. The Kubernetes master creates the load balancer and the related Compute Engine forwarding rules, target pools and firewall rules to make the service fully accessible from outside of Google Cloud Platform. Wait a few minutes for all the containers to be created and for the SSL certificates to be generated and loaded.
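While waiting, you can keep an eye on the rollout with the usual kubectl commands (the exact pod, service and ingress names depend on your configuration):

kubectl get pods --all-namespaces
kubectl get services --all-namespaces
kubectl get ingress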

You have successfully created a secure webhook. Test it by opening the domain that you have registered at the start.

Resources

Enabling SSL using CloudFlare: https://jonnyjordan.com/blog/how-to-setup-cloudflare-flexible-ssl-for-wordpress/
https://www.youtube.com/watch?v=qFvwEVkl5gk


Avoiding Nested Callbacks using RxJS in Loklak Scraper JS

Loklak Scraper JS, as suggested by the name, is a set of scrapers for social media websites written in NodeJS. One of the most common situations while scraping is that a parent webpage provides links to related child webpages, and the data that needs to be scraped is present in both the parent webpage and the child webpages. For example, let’s say we want to scrape Quora user profiles matching the search query “Siddhant”. The matching-profiles webpage for this example is https://www.quora.com/search?q=Siddhant&type=profile, which is the parent webpage, and the child webpages are the links of each matched profile.

Now, a simplistic approach is to first obtain the HTML of the parent webpage and then synchronously fetch the HTML of the child webpages and parse them to get the desired data. The problem with this approach is that it is slower, as the requests are made synchronously.

A different approach is to use request-promise-native to implement the logic in an asynchronous way. But there are limitations with this approach. The HTML of the child webpages can only be fetched after the HTML of the parent webpage is obtained, and the number of child webpages is dynamic. So there is a request dependency between parent and child, i.e. only once we have the data from the parent webpage can we extract data from the child webpages. The code would look like this:

request(parent_url)
   .then(data => {
       ...
       request(child_url)
           .then(data => {
               // again nesting of child urls
           })
           .catch(error => {

           });
   })
   .catch(error => {

   });

 

Firstly, with this approach there is callback hell. Horrible, isn’t it? And we don’t even know how many nested callbacks to use, as the number of child webpages is dynamic.

The saviour: RxJS

The solution to our problem is reactive extensions for JavaScript. Using RxJS we can obtain the required data asynchronously and without callback hell!

The promise-request object of the parent webpage is obtained. From this promise-request object an observable is generated using Rx.Observable.fromPromise. The flatMap operator is used to parse the HTML of the parent webpage and obtain the links of the child webpages. The map method is then used to transform the links into promise-request objects, which are again transformed into observables. The returned value – HTML – from the resulting observables is parsed and accumulated using the zip operator. Finally, the accumulated data is subscribed to. This is implemented in the getScrapedData method of the Quora JS scraper.

getScrapedData(query, callback) {
   // observable from parent webpage
   Rx.Observable.fromPromise(this.getSearchQueryPromise(query))
     .flatMap((t, i) => { // t is html of parent webpage
       // request-promise object of child webpages
       let profileLinkPromises = this.getProfileLinkPromises(t);
       // request-promise object to observable transformation
       let obs = profileLinkPromises.map(elem => Rx.Observable.fromPromise(elem));

       // each Quora profile is parsed
       return Rx.Observable.zip( // accumulation of data from child webpages
         ...obs,
         (...profileLinkObservables) => {
           let scrapedProfiles = [];
           for (let i = 0; i < profileLinkObservables.length; i++) {
             let $ = cheerio.load(profileLinkObservables[i]);
             scrapedProfiles.push(this.scrape($));
           }
           return scrapedProfiles; // accumulated data returned
         }
       )
     })
     .subscribe( // desired data is subscribed
       scrapedData => callback({profiles: scrapedData}),
       error => callback(error)
     );
 }

 


Posting Tweet from Loklak Wok Android

Loklak Wok Android is a peer harvester that posts collected tweets to the Loklak server. Besides being a peer harvester, it also lets users post their own tweets from the app; images and the user's location can be attached to the tweet as well. This blog post explains how tweet posting is implemented in the app.

Adding Dependencies to the project

In app/build.gradle:

apply plugin: 'com.android.application'
apply plugin: 'me.tatarka.retrolambda'

android {
   ...
   packagingOptions {
       exclude 'META-INF/rxjava.properties'
   }
}

dependencies {
   ...
   compile 'com.google.code.gson:gson:2.8.1'

   compile 'com.squareup.retrofit2:retrofit:2.3.0'
   compile 'com.squareup.retrofit2:converter-gson:2.3.0'
   compile 'com.squareup.retrofit2:adapter-rxjava2:2.3.0'

   compile 'io.reactivex.rxjava2:rxjava:2.0.5'
   compile 'io.reactivex.rxjava2:rxandroid:2.0.1'
}

 

In build.gradle project level:

dependencies {
   classpath 'com.android.tools.build:gradle:2.3.3'
   classpath 'me.tatarka:gradle-retrolambda:3.2.0'
}

 

Implementation

The user first authorizes the application so that they are able to post tweets from the app. For posting a tweet, Twitter's statuses/update API endpoint is used, and for attaching images to a tweet, the media/upload API endpoint is used.
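For reference, the two Retrofit interfaces behind the mTwitterApi and mTwitterMediaApi objects used later in this post look roughly like the sketch below. The exact interface and response-model definitions in the app may differ; this only illustrates how the two endpoints are wired up (note that media/upload is served from the upload.twitter.com base URL, while statuses/update is served from api.twitter.com).

import io.reactivex.Observable;
import okhttp3.RequestBody;
import retrofit2.http.Field;
import retrofit2.http.FormUrlEncoded;
import retrofit2.http.Multipart;
import retrofit2.http.POST;
import retrofit2.http.Part;

// StatusUpdate and MediaUpload are the response models referred to later in this post.
public interface TwitterApi {

    // statuses/update: posts the tweet text along with optional media ids and location
    @FormUrlEncoded
    @POST("1.1/statuses/update.json")
    Observable<StatusUpdate> postTweet(
            @Field("status") String status,
            @Field("media_ids") String mediaIds,
            @Field("lat") Double latitude,
            @Field("long") Double longitude);
}

interface TwitterMediaApi {

    // media/upload: multipart upload of the raw image bytes, returns the media id
    @Multipart
    @POST("1.1/media/upload.json")
    Observable<MediaUpload> getMediaId(
            @Part("media") RequestBody media,
            @Part("media_data") String mediaData);
}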

As photos and location can be attached to a tweet, for Android Marshmallow and above we need to ask for runtime permissions for camera, gallery and location. The related permissions are declared in the Manifest file first:

<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
// for location
<uses-feature android:name="android.hardware.location.gps"/>
<uses-feature android:name="android.hardware.location.network"/>

 

If the device is running an OS below Android Marshmallow, there are no runtime permissions; the user is asked for the permissions at the time of installing the app.

Now, runtime permissions are requested; if the user has already granted the permission, the related activity (camera, gallery or location) is started.

For camera permissions, onClickCameraButton is called

@OnClick(R.id.camera)
public void onClickCameraButton() {
   int permission = ContextCompat.checkSelfPermission(
           getActivity(), Manifest.permission.CAMERA);
   if (isAndroidMarshmallowAndAbove && permission != PackageManager.PERMISSION_GRANTED) {
       String[] permissions = {
               Manifest.permission.CAMERA,
               Manifest.permission.WRITE_EXTERNAL_STORAGE,
               Manifest.permission.READ_EXTERNAL_STORAGE
       };
       requestPermissions(permissions, CAMERA_PERMISSION);
   } else {
       startCameraActivity();
   }
}

 

To start the camera activity if the permission is already granted, startCameraActivity method is called

private void startCameraActivity() {
   Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
   File dir = getActivity().getExternalFilesDir(Environment.DIRECTORY_PICTURES);
   mCapturedPhotoFile = new File(dir, createFileName());
   Uri capturedPhotoUri = getImageFileUri(mCapturedPhotoFile);
   intent.putExtra(MediaStore.EXTRA_OUTPUT, capturedPhotoUri);
   startActivityForResult(intent, REQUEST_CAPTURE_PHOTO);
}

 

If the user decides to save the photo clicked from camera activity, the photo should be saved by creating a file and its uri is required to display the saved photo. The filename is created using createFileName method

private String createFileName() {
   String timeStamp = new SimpleDateFormat("ddMMyyyy_HHmmss").format(new Date());
   return "JPEG_" + timeStamp + ".jpg";
}

 

and uri is obtained using getImageFileUri

private Uri getImageFileUri(File file) {
   if (Build.VERSION.SDK_INT < Build.VERSION_CODES.N) {
       return Uri.fromFile(file);
   } else {
       return FileProvider.getUriForFile(getActivity(), "org.loklak.android.provider", file);
   }
}

 

Similarly, for the gallery, onClickGalleryButton method is implemented to ask runtime permissions and launch gallery activity if the permission is already granted.

@OnClick(R.id.gallery)
public void onClickGalleryButton() {
   int permission = ContextCompat.checkSelfPermission(
           getActivity(), Manifest.permission.WRITE_EXTERNAL_STORAGE);
   if (isAndroidMarshmallowAndAbove && permission != PackageManager.PERMISSION_GRANTED) {
       String[] permissions = {
               Manifest.permission.WRITE_EXTERNAL_STORAGE,
               Manifest.permission.READ_EXTERNAL_STORAGE
       };
       requestPermissions(permissions, GALLERY_PERMISSION);
   } else {
       startGalleryActivity();
   }
}

 

For starting the gallery activity, startGalleryActivity is used

private void startGalleryActivity() {
   Intent intent = new Intent();
   intent.setType("image/*");
   intent.setAction(Intent.ACTION_GET_CONTENT);
   intent.putExtra(Intent.EXTRA_ALLOW_MULTIPLE, true);
   startActivityForResult(
           Intent.createChooser(intent, "Select images"), REQUEST_GALLERY_MEDIA_SELECTION);
}

 

And finally for location onClickAddLocationButton is implemented

@OnClick(R.id.location)
public void onClickAddLocationButton() {
   int permission = ContextCompat.checkSelfPermission(
           getActivity(), Manifest.permission.ACCESS_FINE_LOCATION);
   if (isAndroidMarshmallowAndAbove && permission != PackageManager.PERMISSION_GRANTED) {
       String[] permissions = {Manifest.permission.ACCESS_FINE_LOCATION};
       requestPermissions(permissions, LOCATION_PERMISSION);
   } else {
       getLatitudeLongitude();
   }
}

 

If the permission is already granted, getLatitudeLongitude is called. Using LocationManager, we try to obtain the last known location; if there is no last known location, the current location is requested using a LocationListener.

private void getLatitudeLongitude() {
   mLocationManager =
           (LocationManager) getActivity().getSystemService(Context.LOCATION_SERVICE);

   // last known location from network provider
   Location location = mLocationManager.getLastKnownLocation(LocationManager.NETWORK_PROVIDER);
   if (location == null) { // last known location from gps
       location = mLocationManager.getLastKnownLocation(LocationManager.GPS_PROVIDER);
   }

   if (location != null) { // last known loaction available
       mLatitude = location.getLatitude();
       mLongitude = location.getLongitude();
       setLocation();
   } else { // last known location not available
       mLocationListener = new TweetLocationListener();
       // current location requested
       mLocationManager.requestLocationUpdates("gps", 1000, 1000, mLocationListener);
   }
}

 

TweetLocationListener implements a LocationListener that provides the current location. If GPS is disabled, settings is launched so that user can enable GPS. This is implemented in onProviderDisabled callback of the listener.

private class TweetLocationListener implements LocationListener {

   @Override
   public void onLocationChanged(Location location) {
       mLatitude = location.getLatitude();
       mLongitude = location.getLongitude();
       setLocation();
   }

   @Override
   public void onStatusChanged(String s, int i, Bundle bundle) {

   }

   @Override
   public void onProviderEnabled(String s) {

   }

   @Override
   public void onProviderDisabled(String s) {
       Intent intent = new Intent(Settings.ACTION_LOCATION_SOURCE_SETTINGS);
       startActivity(intent);
   }
}

 

If the user is asked for permissions, the onRequestPermissionsResult callback is invoked; if the permission is granted, the respective activity is opened or the latitude and longitude are obtained.

@Override
public void onRequestPermissionsResult(
       int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
   boolean isResultGranted = grantResults[0] == PackageManager.PERMISSION_GRANTED;
   switch (requestCode) {
       case CAMERA_PERMISSION:
           if (grantResults.length > 0 && isResultGranted) {
               startCameraActivity();
           }
           break;
       case GALLERY_PERMISSION:
           if (grantResults.length > 0 && isResultGranted) {
               startGalleryActivity();
           }
           break;
       case LOCATION_PERMISSION:
           if (grantResults.length > 0 && isResultGranted) {
               getLatitudeLongitude();
           }
   }
}

 

Since, the camera and gallery activities are started to obtain a result i.e. photo(s). So, onActivityResult callback is called

@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
   switch (requestCode) {
       case REQUEST_CAPTURE_PHOTO:
           if (resultCode == Activity.RESULT_OK) {
               onSuccessfulCameraActivityResult();
           }
           break;
       case REQUEST_GALLERY_MEDIA_SELECTION:
           if (resultCode == Activity.RESULT_OK) {
               onSuccessfulGalleryActivityResult(data);
           }
           break;
       default:
           super.onActivityResult(requestCode, resultCode, data);
   }
}

 

If the result of the camera activity is a success, i.e. the image was saved by the user, the saved image is displayed in a RecyclerView in TweetPostingFragment. This is implemented in the onSuccessfulCameraActivityResult method:

private void onSuccessfulCameraActivityResult() {
   tweetMultimediaContainer.setVisibility(View.VISIBLE);
   Bitmap bitmap = BitmapFactory.decodeFile(mCapturedPhotoFile.getAbsolutePath());
   mTweetMediaAdapter.clearAdapter();
   mTweetMediaAdapter.addBitmap(bitmap);
}

 

For the gallery activity, if a single image is selected, the uri of the image can be obtained using the getData method of the Intent. If multiple images are selected, the uris of the images are stored in ClipData. After the uris of the images are obtained, it is checked whether more than 4 images were selected, as Twitter allows at most 4 images in a tweet; if so, the uris of the extra images are removed. Using the uris of the images, the files are obtained, and from the files Bitmaps are obtained and displayed in the RecyclerView. This is implemented in onSuccessfulGalleryActivityResult:

private void onSuccessfulGalleryActivityResult(Intent intent) {
   tweetMultimediaContainer.setVisibility(View.VISIBLE);
   Context context = getActivity();

   // get uris of selected images
   ClipData clipData = intent.getClipData();
   List<Uri> uris = new ArrayList<>();
   if (clipData != null) {
       for (int i = 0; i < clipData.getItemCount(); i++) {
           ClipData.Item item = clipData.getItemAt(i);
           uris.add(item.getUri());
       }
   } else {
       uris.add(intent.getData());
   }

   // remove of more than 4 images
   int numberOfSelectedImages = uris.size();
   if (numberOfSelectedImages > 4) {
       while (numberOfSelectedImages-- > 4) {
           uris.remove(numberOfSelectedImages);
       }
       Utility.displayToast(mToast, context, moreImagesMessage);
   }

   // get bitmap from uris of images
   List<Bitmap> bitmaps = new ArrayList<>();
   for (Uri uri : uris) {
       String filePath = FileUtils.getPath(context, uri);
       Bitmap bitmap = BitmapFactory.decodeFile(filePath);
       bitmaps.add(bitmap);
   }

   // display images in RecyclerView
   mTweetMediaAdapter.setBitmapList(bitmaps);
}

 

Now, to post images with a tweet, first the ID of each image needs to be obtained using the media/upload API endpoint via a multipart POST request, and then the obtained ID(s) are passed as the value of “media_ids” in the statuses/update API endpoint. Since there can be more than one image, a single observable is created for each image. The bitmap is converted to raw bytes for the multipart POST request. As the process includes a network request and converting a bitmap to bytes – a resource-heavy task which shouldn’t run on the main thread – an observable is created for it, as a result of which the tasks are performed concurrently, i.e. in a separate thread.

private Observable<String> getImageId(Bitmap bitmap) {
   return Observable
           .defer(() -> {
               // convert bitmap to bytes
               ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
               bitmap.compress(Bitmap.CompressFormat.JPEG, 100, byteArrayOutputStream);
               byte[] bytes = byteArrayOutputStream.toByteArray();
               RequestBody mediaBinary = RequestBody.create(MultipartBody.FORM, bytes);
               return Observable.just(mediaBinary);
           })
           .flatMap(mediaBinary -> mTwitterMediaApi.getMediaId(mediaBinary, null))
           .flatMap(mediaUpload -> Observable.just(mediaUpload.getMediaIdString()))
           .subscribeOn(Schedulers.newThread());
}

 

The tweet is posted when the “Tweet” button is clicked, which invokes the onClickTweetPostButton method:

@OnClick(R.id.tweet_post_button)
public void onClickTweetPostButton() {
   String status = tweetPostEditText.getText().toString();

   List<Bitmap> bitmaps = mTweetMediaAdapter.getBitmapList();
   List<Observable<String>> mediaIdObservables = new ArrayList<>();
   for (Bitmap bitmap : bitmaps) { // observables for images is created
       mediaIdObservables.add(getImageId(bitmap));
   }

   if (mediaIdObservables.size() > 0) {
       // Post tweet with image
       postImageAndTextTweet(mediaIdObservables, status);
   } else if (status.length() > 0) {
       // Post text only tweet
       postTextOnlyTweet(status);
   } else {
       Utility.displayToast(mToast, getActivity(), tweetEmptyMessage);
   }
}

 

Tweets containing images are posted by calling postImageAndTextTweet; once the tweet data is obtained, the data is cross-posted to the loklak server. The image IDs are obtained concurrently by using the zip operator.

private void postImageAndTextTweet(List<Observable<String>> imageIdObservables, String status) {
   mProgressDialog.show();
   ConnectableObservable<StatusUpdate> observable = Observable.zip(
           imageIdObservables,
           mediaIdArray -> {
               String mediaIds = "";
               for (Object mediaId : mediaIdArray) {
                   mediaIds = mediaIds + String.valueOf(mediaId) + ",";
               }
               return mediaIds.substring(0, mediaIds.length() - 1);
           })
           .flatMap(imageIds -> mTwitterApi.postTweet(status, imageIds, mLatitude, mLongitude))
           .subscribeOn(Schedulers.io())
           .publish();

   Disposable postingDisposable = observable
           .subscribeOn(Schedulers.io())
           .observeOn(AndroidSchedulers.mainThread())
           .subscribe(this::onSuccessfulTweetPosting, this::onErrorTweetPosting);
   mCompositeDisposable.add(postingDisposable);

   // cross posting to loklak server   
   Disposable crossPostingDisposable = observable
           .flatMap(this::pushTweetToLoklak)
           .subscribeOn(Schedulers.io())
           .observeOn(AndroidSchedulers.mainThread())
           .subscribe(
                   push -> {},
                   t -> Log.e(LOG_TAG, "Cross posting failed: " + t.toString())
           );
   mCompositeDisposable.add(crossPostingDisposable);

   Disposable publishDisposable = observable.connect();
   mCompositeDisposable.add(publishDisposable);
}

 

In the case of text-only tweets, the text is obtained from the EditText and mediaIds are passed as null. Once the tweet data is obtained, it is cross-posted to the loklak server. This is executed by calling postTextOnlyTweet:

private void postTextOnlyTweet(String status) {
   mProgressDialog.show();
   ConnectableObservable<StatusUpdate> observable =
           mTwitterApi.postTweet(status, null, mLatitude, mLongitude)
           .subscribeOn(Schedulers.io())
           .publish();

   Disposable postingDisposable = observable
           .subscribeOn(Schedulers.io())
           .observeOn(AndroidSchedulers.mainThread())
           .subscribe(this::onSuccessfulTweetPosting, this::onErrorTweetPosting);
   mCompositeDisposable.add(postingDisposable);


   // cross posting to loklak server
   Disposable crossPostingDisposable = observable
           .flatMap(this::pushTweetToLoklak)
           .subscribeOn(Schedulers.io())
           .observeOn(AndroidSchedulers.mainThread())
           .subscribe(
                   push -> Log.e(LOG_TAG, push.getStatus()),
                   t -> Log.e(LOG_TAG, "Cross posting failed: " + t.toString())
           );
   mCompositeDisposable.add(crossPostingDisposable);

   Disposable publishDisposable = observable.connect();
   mCompositeDisposable.add(publishDisposable);
}

 


Writing Browser Specific CSS for Susper in Angular

In Susper, we were facing a unique problem with the alignment of the Information box and the Analytics box.
At a width of around 1290px, the boxes fit perfectly in Firefox. However, they were slipping to the next line in Google Chrome at the same width (1290px).

The solution to this issue was to write browser-specific CSS.
The two most commonly used browser-specific tags are:

  1. @-moz-document url-prefix() { }: This tag is used to target the Mozilla Firefox browser in particular. Anything written within the curly braces will not apply to any other browser.
  2. @media screen and (-webkit-min-device-pixel-ratio:0) { }: This tag is used to target all browsers that support webkit such as Chrome, Safari etc.

For our problem, we need to use @media screen and (-webkit-min-device-pixel-ratio:0) { }.
This is how the code was written for both components (Information box and Analytics box). Please refer to infobox.component.css and statsbox.component.css for the entire code:

@media screen and (-webkit-min-device-pixel-ratio:0) {
  @media screen and (max-width: 1300px) {
    .card {
      width: 366px;
    }
  }
}

@media screen and (max-width: 1280px) {
  .card {
    width: 366px;
  }
}

As a result of this snippet of code, we see the following effects:

  • In Chrome, the boxes change to a smaller width at 1300px itself, thus preventing them from slipping to the next line
  • In Firefox, the boxes change to a smaller width only at 1280px, and not at 1300px, thus achieving the exact design we envisioned.

With this change, the display in Chrome finally matches the design we envisioned.

References:

  1. Stack overflow on specific CSS tags for Chrome: https://stackoverflow.com/questions/9328832/how-to-apply-specific-css-rules-to-chrome-only
  2. Stack overflow on specific CSS tags for Firefox: https://stackoverflow.com/questions/952861/targeting-only-firefox-with-css

Implementing Permissions for Speakers API in Open Event API Server

In my previous blog post I talked about what the permissions listed in the developer handbook mean and which part of the codebase defines which part of the permission clauses. The permission manager provides the framework to implement these permissions and proper access control based on the developer handbook.

In this blog post, the actual implementation of the permissions is described (the Speakers API is under consideration here). The following table shows the permissions in the developer handbook:

|                  | List     | View     | Create | Update | Delete |
|------------------|----------|----------|--------|--------|--------|
| Superadmin/admin | ✓        | ✓        | ✓      | ✓      | ✓      |
| Event organizer  | ✓ [1]    | ✓ [1]    | ✓ [1]  | ✓ [1]  | ✓ [1]  |
| Registered User  | ✓ [3]    | ✓ [3]    | ✓ [4]  | ✓ [3]  | ✓ [3]  |
| Everyone else    | ✓ [2][4] | ✓ [2][4] |        |        |        |

  1. Only self-owned events
  2. Only of sessions with state approved or accepted
  3. Only of self-submitted sessions
  4. Only to events with state published.

Super admin and admin should be able to access all the methods – list, view, create, update and delete. All the permissions are implemented through functions derived from the permission manager. Since all these functions first check for super admin and admin, those cases are automatically taken care of.

Only of self-submitted sessions
This means that a registered user can list, view, edit or delete speakers of a session which he himself submitted. This requires adding a ‘creator’ attribute to the session object, which will help us determine whether the session was created by the user. So before making a POST for sessions, the current user identity is included as part of the payload.

def before_post(self, args, kwargs, data):
   data['creator_id'] = current_identity.id


Now that we have added a creator id to the session, a method is used to check whether the session was created by the current user.

def is_session_self_submitted(view, view_args, view_kwargs, *args, **kwargs):
    user = current_identity


Firstly, the current identity is set as user, which will later be used to check the id. Sequentially, admin, super admin, organizer and co-organizer roles are checked. After this, the session is fetched using kwargs['session_id']. Then, if the current user id is the same as the creator id of the fetched session, access is granted; otherwise a Forbidden error is returned.

if session.creator_id == user.id:
   return view(*view_args, **view_kwargs)


In the before_post method of the speakers class, the session ids received in the data are passed to this function in kwargs as session_id. The permissions are then checked there using the current user. If the session ids are not those of self-submitted sessions, ‘Session Not Found’ is returned.

if not has_access('is_session_self_submitted', session_id=session_id):
    raise ObjectNotFound({'parameter': 'session_id'},
                         "Session: {} not found".format(session_id))


Only of sessions with state approved or accepted
This check is required for a user who has not submitted the session himself, so that he can only see speaker profiles of accepted sessions. First, if the user is not authenticated, permissions are not checked. If co-organizer access is available, the user can see all the speakers, so no filtering is done in that case. If not, then ‘is_session_self_submitted’ is checked. If it passes, again no filtering is done, but if not, the following query filters approved or accepted sessions.

if not has_access('is_session_self_submitted', session_id=session.id):
    query_ = query_.filter(or_(Session.state == "approved", Session.state == "accepted"))

Similarly, all the permissions first generate a list of all objects and then filter it based on the access level, instead of fetching the list based on permissions.

Only to events with state published
It is necessary that users other than the organizers and co-organizers cannot see events which are in the draft state. The same follows for speaker profiles – a user cannot submit or view a speaker profile for an unpublished event; hence this constraint. So before a POST of speakers, if the event is not published, an event-not-found error is returned.

if event.state == "draft":
    raise ObjectNotFound({'parameter': 'event_id'},
                        "Event: {} not found".format(data['event_id'])


For GET, the implementation of this is similar to the previous permission. A basic query is generated as such:

query_ = query_.join(Event).filter(Event.id == event.id)


Now, if the user does not have at least co-organizer access, draft events must be filtered out.

if not has_access('is_coorganizer', event_id=event.id):
    query_ = query_.filter(Event.state == "published")
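Putting the GET filtering together, the query hook ends up looking roughly like the sketch below. It is only an illustration based on the snippets above; the actual method in the codebase differs in its details.

def query(self, view_kwargs):
    # start from all the speakers of the event
    query_ = self.session.query(Speaker).join(Event).filter(Event.id == view_kwargs['event_id'])

    # everyone without co-organizer access only sees speakers of published events
    if not has_access('is_coorganizer', event_id=view_kwargs['event_id']):
        query_ = query_.filter(Event.state == "published")

    return query_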


Some of the finer details have been skipped here, which can be found in the code.


Understanding Permissions for Various APIs in Open Event API Server

Since the Open Event Server has various elements, a proper permissions system is essential. This huge list of permissions is well compiled in the developer handbook, which can be found here. In this blog post, the permissions listed in the developer handbook are discussed. Let’s start with what we wish to achieve: making sense of these permissions and seeing where each clause fits in the API server’s codebase.

For example, Sponsors API has the following permissions.

|                  | List     | View     | Create | Update | Delete |
|------------------|----------|----------|--------|--------|--------|
| Superadmin/admin | ✓        | ✓        | ✓      | ✓      | ✓      |
| Event organizer  | ✓ [1]    | ✓ [1]    | ✓ [1]  | ✓ [1]  | ✓ [1]  |
| Registered User  | ✓ [3]    | ✓ [3]    | ✓ [4]  | ✓ [3]  | ✓ [3]  |
| Everyone else    | ✓ [2][4] | ✓ [2][4] |        |        |        |

  1. Only self-owned events
  2. Only sessions with state approved or accepted
  3. Only self-submitted sessions
  4. Only to events with state published.

Based on the flask-rest-jsonapi resource managers, List and Create are handled by ResourceList through its GET and POST methods, whereas View, Update and Delete work on single objects and hence are provided by ResourceDetail’s GET, PATCH and DELETE respectively. Each function of the permission manager has a jwt_required decorator.

@jwt_required
def is_super_admin(view, view_args, view_kwargs, *args, **kwargs):

@jwt_required
def is_session_self_submitted(view, view_args, view_kwargs, *args, **kwargs):


This ensures that whenever a check for access control is made through the permission manager, the user is signed in to Open Event. Additionally, the permissions are written in a hierarchical way such that for every permission, the user is first checked for admin or super admin, then for the other accesses. A similar hierarchy is kept for organizer accesses like track organizer, registrar, staff, organizer and co-organizer.
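As a rough sketch of that hierarchy (the role-checking helpers on the user object and the error class are assumptions here; the real permission manager may name them slightly differently):

@jwt_required
def is_organizer(view, view_args, view_kwargs, *args, **kwargs):
    user = current_identity

    # admins and super admins always pass, so they are checked first
    if user.is_super_admin or user.is_admin:
        return view(*view_args, **view_kwargs)

    # then the event-level role is checked
    if user.is_organizer(view_kwargs['event_id']):
        return view(*view_args, **view_kwargs)

    return ForbiddenError({'source': ''}, 'Organizer access is required').respond()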

Some API resources require no authentication for List. To support this, we need to add a check for the authentication token in the headers. Since each function of the permission manager has jwt_required as a decorator, it is important to check for the presence of a JWT token in the request headers, because only in that case can we proceed to check for specific permissions.

if 'Authorization' in request.headers:
    _jwt_required(current_app.config['JWT_DEFAULT_REALM'])


Since the resources are created by endpoints of the form ‘/v1/<resource>/’, creation is handled by a separate ResourceListPost class. This class is POST only and has a before_post hook where the required relationships and permissions are checked before inserting the data into the tables. In this hook, let’s say that event is a required relationship (defined through ResourceRelationRequired); then we use our custom method

def require_relationship(resource_list, data):
    for resource in resource_list:
        if resource not in data:
            raise UnprocessableEntity({'pointer': '/data/relationships/{}'.format(resource)},
                                      "A valid relationship with {} resource is required".format(resource))


to check if the required relationships are present in the data. The event_id here can also be used to check for organizer or co-organizer access in the permissions manager for a particular event.
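For instance, a before_post hook that enforces both the relationship and an event-level permission might look roughly like this (the specific resource and the choice of error are illustrative):

def before_post(self, args, kwargs, data):
    # the event relationship must be supplied by the client
    require_relationship(['event'], data)

    # the event id from the payload is then used for the access check
    if not has_access('is_coorganizer', event_id=data['event']):
        raise ObjectNotFound({'parameter': 'event_id'},
                             "Event: {} not found".format(data['event']))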

Here’s another permissions structure for a different API – Settings.

|                  | List  | View | Create | Update | Delete |
|------------------|-------|------|--------|--------|--------|
| Superadmin/admin | ✓     | ✓    | ✓      | ✓      | ✓      |
| Everyone else    | ✓ [1] |      |        |        |        |

  1. Only the app_name, tagline, analytics_key, stripe_publishable_key, google_url, github_url, twitter_url, support_url, facebook_url, youtube_url, android_app_url and web_app_url fields.

This API does not allow access to the complete object, but only to the fields listed above. The complete details can be checked here.


Optimising Docker Images for loklak Server

The loklak server is in the process of moving to Kubernetes. In order to do so, we needed different Docker images that suit these deployments. In this blog post, I will discuss the process through which I optimised the size of the Docker image for these deployments.

Initial Image

The image that I started with used Ubuntu as base. It installed all the components needed and then modified the configurations as required –

FROM ubuntu:latest

# Env Vars
ENV LANG=en_US.UTF-8
ENV JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF8
ENV DEBIAN_FRONTEND noninteractive

WORKDIR /loklak_server

RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y git openjdk-8-jdk
RUN git clone https://github.com/loklak/loklak_server.git /loklak_server
RUN git checkout development
RUN ./gradlew build -x test -x checkstyleTest -x checkstyleMain -x jacocoTestReport
RUN sed -i.bak 's/^\(port.http=\).*/\180/' conf/config.properties
... # More configurations
RUN echo "while true; do sleep 10;done" >> bin/start.sh

# Start
CMD ["bin/start.sh", "-Idn"]

The size of images built using this Dockerfile was quite huge –

REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE

loklak_server       latest              a92f506b360d        About a minute ago   1.114 GB

ubuntu              latest              ccc7a11d65b1        3 days ago           120.1 MB

But since this size is not acceptable, we needed to reduce it.

Moving to Alpine

Alpine Linux is an extremely lightweight Linux distro, built mainly for the container environment. Its size is so tiny that it hardly has any impact on the overall size of images. So, I replaced Ubuntu with Alpine –

FROM alpine:latest

...
RUN apk update
RUN apk add git openjdk8 bash
...

And now we had much smaller images –

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

loklak_server       latest              54b507ee9187        17 seconds ago      668.8 MB

alpine              latest              7328f6f8b418        6 weeks ago         3.966 MB

As we can see, due to the absence of cached packages and the small size of Alpine, the image size is reduced to almost half of the original.

Reducing Content Size

There are many things in a project which are no longer needed while running the project, like the .git folder (which is huge in case of loklak) –

$ du -sh loklak_server/.git
236M loklak_server/.git

We can remove such files from the Docker image and save a lot of space –

rm -rf .[^.] .??*

Optimizing Number of Layers

The number of layers also affects the size of the image: the more layers there are, the larger the image. In the Dockerfile, we can club the RUN commands together to reduce the number of layers.

RUN apk update && apk add openjdk8 git bash && \
  git clone https://github.com/loklak/loklak_server.git /loklak_server && \
  ...

After this, the effective size is again reduced by a major factor –

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

loklak_server       latest              54b507ee9187        17 seconds ago      422.3 MB

alpine              latest              7328f6f8b418        6 weeks ago         3.966 MB

Conclusion

In this blog post, I discussed the process of optimising the size of the Docker image for Kubernetes deployments of the loklak server. The size was reduced from 1.114 GB to 422.3 MB, which provides much faster push/pull times for the Docker images and, therefore, faster updates for Kubernetes deployments.
