Creating System Images UI in Open Event Frontend

In Open Event Frontend, the ‘admin/content’ route contains a ‘system-images’ route where a user can update the image of an event topic that was uploaded while creating an event. We achieved this as follows:

First, we create a route called ‘system-images’.

ember g route admin/content/system-images

This will generate three files:
1) routes/admin/content/system-images.js (route)
2) templates/admin/content/system-images.hbs (template)
3) test/unit/routes/admin/content/system-images-test.js (test file)
We also create a subroute of the system-images route to render the subtopics queried through the API.

ember g route admin/content/system-images/list

This will generate three files:
1) routes/admin/content/system-images/list.js (subroute)
2) templates/admin/content/system-images/list.hbs (template)
3) test/unit/routes/admin/content/system-images/list-test.js (test file)

From our ‘system-images’ route, we render the ‘system-images’ template. The ‘system-images’ route has a subroute called ‘list’, in which we render the subtopics fetched from the API. The left side menu is the content of ‘system-images.hbs’, and the content on the right comes from its subroute, i.e. ‘list.hbs’. The ‘list’ subroute provides the facility to upload a system image. The API returns an array of objects containing subtopics as follows (a single object is shown here; there will be multiple in the array):

{
  id          : 4545,
  name        : 'avatar',
  placeholder : {
    originalImageUrl : 'https://placeimg.com/360/360/any',
    copyright        : 'All rights reserved',
    origin           : 'Google Images'
  }
}
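While the API for this is mocked, the ‘list’ subroute’s model hook can simply return such an array. A minimal sketch of what that could look like (the values are placeholders taken from the mock response above, not the real API):

// routes/admin/content/system-images/list.js -- sketch while the API is mocked
import Ember from 'ember';

export default Ember.Route.extend({
  model() {
    // Placeholder subtopics; the real implementation would query the API instead.
    return [
      {
        id          : 4545,
        name        : 'avatar',
        placeholder : {
          originalImageUrl : 'https://placeimg.com/360/360/any',
          copyright        : 'All rights reserved',
          origin           : 'Google Images'
        }
      }
    ];
  }
});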

Following is the content of our uploader, i.e. ‘list.hbs’, the template of the subroute of ‘system-images’.

<div class="ui segment">
  {{#each model as |subTopic|}}
    <h4>{{subTopic.name}}</h4>
    <img src="{{subTopic.placeholder.originalImageUrl}}" class="ui fluid image" alt={{subTopic.name}}>
    <div class="ui hidden divider"></div>
    <button class="ui button primary" {{action 'openModal' subTopic}} id="changebutton">{{t 'Change'}}</button>
  {{/each}}
</div>
{{modals/change-image-modal isOpen=isModalOpen subTopic=selectedSubTopic}}

We can see from the above template that we are iterating over the response (subtopics) from the API. For now, we are using the mock server response since the API for it is not ready yet. The ‘Change’ button opens up the ‘change-image-modal’ used to upload a new image.
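The ‘Change’ button’s openModal action only needs to remember which subtopic was clicked and open the modal. A minimal sketch of such an action (property names follow the template above; the project’s actual implementation may differ):

actions: {
  openModal(subTopic) {
    this.set('selectedSubTopic', subTopic); // remember which subtopic's image to change
    this.set('isModalOpen', true);          // show the change-image-modal
  }
}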

The ‘change-image-modal.hbs’ template has the following content:

<div class="sixteen wide column">
        {{widgets/forms/image-upload
          needsCropper=true
          label=(t 'Update Image')
          id='user_image'
          aspectRatio=(if (eq subTopic.name 'avatar') (array 1 1))
          icon='photo'
          hint=(t 'Select Image')
          maxSizeInKb=10000
          helpText=(t 'For Cover Photos : 300x150px (2:1 ratio) image.
                    For Avatar Photos : 150x150px (1:1 ratio) image.')}}

        <form class="ui form">
          <div class="field">
            <label class="ui label">{{t 'Copyright information'}}</label>
            <div class="ui input">
              {{input type="text"}}
            </div>
          </div>
          <div class="field">
            <label class="ui label">{{t 'Origin information'}}</label>
            <div class="ui input">
              {{input type="text"}}
            </div>
          </div>
        </form>

      </div>

The above uploader uses the custom ‘image-upload’ widget which we use throughout Open Event Frontend. There are also two input fields for the ‘copyright’ and ‘origin’ information of the image. On clicking the ‘Select Image’ button and choosing an image from the file input, we get a cropper for the image to be uploaded, where the image can be cropped according to the aspect ratio maintained for it.

Thus, a user can update the image of the event topic that they created.

Resources:

Ember JS Official guide.

Mastering modals in Ember JS by Ember Guru.

Source code: https://github.com/fossasia/open-event-frontend

 

Implementing Tweet Search Suggestions in Loklak Wok Android

Loklak Wok Android is not only a peer harvester for Loklak Server but also lets users search tweets using Loklak’s API endpoints. To provide a better tweet search experience, the app provides search suggestions using the suggest API endpoint. This blog describes how “Search Suggestions” is implemented.

Third Party Libraries used to Implement Suggestion Feature

  • Retrofit2: Used for sending network requests.
  • Gson: Used to convert JSON to POJOs (Plain Old Java Objects).
  • RxJava and RxAndroid: Used to implement a clean asynchronous workflow.
  • Retrolambda: Provides support for lambdas in Android.

These libraries can be installed by adding the following dependencies in app/build.gradle

android {
   .
   // avoids duplicate rxjava.properties files
   packagingOptions {
      exclude 'META-INF/rxjava.properties'
   }
}

dependencies {
   // gson and retrofit2
    compile 'com.google.code.gson:gson:2.8.1'
    compile 'com.squareup.retrofit2:retrofit:2.3.0'
    compile 'com.squareup.retrofit2:converter-gson:2.3.0'
    compile 'com.squareup.retrofit2:adapter-rxjava2:2.3.0'

   // rxjava and rxandroid
    compile 'io.reactivex.rxjava2:rxjava:2.0.5'
    compile 'io.reactivex.rxjava2:rxandroid:2.0.1'
    compile 'com.jakewharton.rxbinding2:rxbinding:2.0.0'
}

 

To add retrolambda

// in project's build.gradle
dependencies {
    
    classpath 'me.tatarka:gradle-retrolambda:3.2.0'
}

// in app level build.gradle at the top
apply plugin: 'me.tatarka.retrolambda'

 

Fetching Suggestions

Retrofit2 sends a GET request to the suggest API endpoint, and the JSON response returned is serialized to Java objects using the models defined in the models.suggest package. The models can be easily generated using JSONSchema2Pojo. The benefit of using Gson is that the hard work of parsing JSON is handled by it. The static method createRestClient creates the Retrofit instance to be used for network calls:

private static void createRestClient() {
   sRetrofit = new Retrofit.Builder()
           .baseUrl(BASE_URL) // base url : https://api.loklak.org/api/
           .addConverterFactory(GsonConverterFactory.create(gson))
           .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
           .build();
}
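The models in the models.suggest package are plain POJOs annotated for Gson. A trimmed sketch of what SuggestData can look like (field names are assumed from the JSON keys; the generated classes contain more fields, and a separate Query model holds the individual suggestion entries):

// Sketch of a Gson model for the suggest endpoint's response
public class SuggestData {

    @SerializedName("queries")
    private List<Query> queries = new ArrayList<>();

    public List<Query> getQueries() {
        return queries;
    }

    public void setQueries(List<Query> queries) {
        this.queries = queries;
    }
}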

 

The suggest endpoint is defined in the LoklakApi interface:

public interface LoklakApi {

   @GET("/api/suggest.json")
   Observable<SuggestData> getSuggestions(@Query("q") String query);

   @GET("/api/suggest.json")
   Observable<SuggestData> getSuggestions(@Query("q") String query, @Query("count") int count);

   .
}

 

Now, the suggestions are obtained using the fetchSuggestion method. First, it creates the REST client to send network requests using the createApi method (which internally calls createRestClient implemented above). The suggestion query is obtained from the EditText. Then the RxJava Observable is subscribed on a separate thread meant for IO operations, and finally the obtained data is observed, i.e. views are updated, on the main UI thread.

private void fetchSuggestion() {
   LoklakApi loklakApi = RestClient.createApi(LoklakApi.class); // rest client created
   String query = tweetSearchEditText.getText().toString(); // suggestion query from EditText
   Observable<SuggestData> suggestionObservable = loklakApi.getSuggestions(query); // observable created
   Disposable disposable = suggestionObservable
           .subscribeOn(Schedulers.io()) // subscribed on IO thread
           .observeOn(AndroidSchedulers.mainThread()) // observed on MainUI thread
           .subscribe(this::onSuccessfulRequest, this::onFailedRequest); // views are manipulated accordingly
   mCompositeDisposable.add(disposable);
}

 

If the network request is successful, the onSuccessfulRequest method is called, which updates the data in the RecyclerView:

private void onSuccessfulRequest(SuggestData suggestData) {
   if (suggestData != null) {
       mSuggestAdapter.setQueries(suggestData.getQueries()); // data updated.
   }
   setAfterRefreshingState();
}

 

If the network request fails, onFailedRequest is called, which displays a toast saying "Cannot fetch suggestions, Try Again!". If requests are sent simultaneously and they fail, the previous toast is removed first:

private void onFailedRequest(Throwable throwable) {
   Log.e(LOG_TAG, throwable.toString());
   if (mToast != null) { // checks if a previous toast is present
       mToast.cancel(); // removes the previous toast.
   }
   setAfterRefreshingState();
   // value of networkRequestError: "Cannot fetch suggestions, Try Again!"
   mToast = Toast.makeText(getActivity(), networkRequestError, Toast.LENGTH_SHORT); // toast is created
   mToast.show(); // toast is displayed
}

 

Updating Suggestions Live

One way to update suggestions as the user types is to send a GET request with a query parameter to the suggest API endpoint on every change, cancelling any previous request that is still incomplete. This involves a lot of IO work and seems unnecessary, because we would be sending requests even before the user has finished typing what they want to search. One probable solution is to use a Timer:

private TextWatcher searchTextWatcher = new TextWatcher() {  
    @Override
    public void afterTextChanged(Editable arg0) {
        // user typed: start the timer
        timer = new Timer();
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                String query = editText.getText().toString();
                // send network request to get suggestions
            }
        }, 600); // 600 ms delay before the timer executes the run() method from TimerTask
    }

    @Override
    public void beforeTextChanged(CharSequence s, int start, int count, int after) {}

    @Override
    public void onTextChanged(CharSequence s, int start, int before, int count) {
        // user is typing: reset already started timer (if existing)
        if (timer != null) {
            timer.cancel();
        }
    }
};

 

This approach works and eventually gets the job done. But for something this simple we are writing a lot of code, which looks scary at first sight.

Let’s try a second approach using RxBinding for Android views and RxJava operators. RxBinding simply converts every event on the view into an Observable which can be subscribed to and worked upon.

In place of a TextWatcher we use the RxTextView.textChanges method, which takes an EditText as a parameter. textChanges creates an Observable of character sequences for text changes on the view, here the EditText. Then the debounce operator of RxJava is used, which drops emissions until the timeout expires without a new change, a clean alternative to the Timer. The second approach is implemented in updateSuggestions:

private void updateSuggestions() {
   Disposable disposable = RxTextView.textChanges(tweetSearchEditText) // generating observables for text changes
           .debounce(400, TimeUnit.MILLISECONDS) // dropping emissions, i.e. text changes within 400 ms
           .subscribe(charSequence -> {
               if (charSequence.length() > 0) {
                   fetchSuggestion(); // suggestions obtained
               }
           });
   mCompositeDisposable.add(disposable);
}

 

CompositeDisposable is a bucket that contains Disposable objects, which are returned each time an Observable is subscribed to. So, all the disposable objects are collected in CompositeDisposable and unsubscribed when onStop of the fragment is called to avoid memory leaks.
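Clearing that bucket when the fragment stops is a one-liner; a minimal sketch (assuming mCompositeDisposable was created earlier in the fragment):

@Override
public void onStop() {
    super.onStop();
    // dispose all collected subscriptions so the fragment is not leaked;
    // clear() could be used instead if the container needs to be reused
    mCompositeDisposable.dispose();
}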

A SwipeRefreshLayout is used so that the user can retry if the request fails or refresh to get new suggestions. While refreshing is in progress, a circular progress indicator is shown and the RecyclerView showing the old suggestions is made invisible; this is done by the setBeforeRefreshingState method:

private void setBeforeRefreshingState() {
   refreshSuggestions.setRefreshing(true);
   tweetSearchSuggestions.setVisibility(View.INVISIBLE);
}

 

Similarly, once refreshing is done, the progress indicator is stopped and the visibility of the RecyclerView, which now contains the updated suggestions, is changed to VISIBLE. This is done in the setAfterRefreshingState method:

private void setAfterRefreshingState() {
   refreshSuggestions.setRefreshing(false);
   tweetSearchSuggestions.setVisibility(View.VISIBLE);
}

 

References:

Testing Errors and Exceptions Using Unittest in Open Event Server

Like all other helper functions in FOSSASIA‘s Open Event Server, we also need to test the exception and error helper functions and classes. The error helper classes are mainly used to create error handler responses for known errors. For example, we know error 403 is Access Forbidden, but we want to send a proper source along with a proper error message to help identify and handle the error, hence we use the error classes. To ensure that future commits do not change the errors unintentionally, we implemented unit tests for them.

There are mainly two kinds of error classes: one is HTTP status errors and the other is exceptions. Depending on the type of error we get in the try-except block for a particular API, we raise the corresponding exception or error.

Unit Test for Exception

Exceptions are written in this form:

@validates_schema
def validate_quantity(self, data):
    if 'max_order' in data and 'min_order' in data:
        if data['max_order'] < data['min_order']:
            raise UnprocessableEntity({'pointer': '/data/attributes/max-order'},
                                      "max-order should be greater than min-order")

 

This error is raised wherever the data that is sent as POST or PATCH is unprocessable. For example, this is how we raise this error:

raise UnprocessableEntity({'pointer': '/data/attributes/min-quantity'},
                          "min-quantity should be less than max-quantity")

This exception is raised due to an error in data validation, where the maximum quantity should be more than the minimum quantity.

To test that the above line indeed raises an exception of UnprocessableEntity with status 422, we use the assertRaises() function. Following is the code:

def test_exceptions(self):
    # Unprocessable Entity Exception
    with self.assertRaises(UnprocessableEntity):
        raise UnprocessableEntity({'pointer': '/data/attributes/min-quantity'},
                                  "min-quantity should be less than max-quantity")


In the above code, with self.assertRaises() creates a context for the expected exception type, so that when the next line raises an exception, the test asserts that the raised exception matches the expected one and hence ensures that the correct exception is being raised.

Unit Test for Error

In the error helper classes, for known HTTP status codes we return a response that is readable and understandable by the user. This is how we create such an error:

ForbiddenError({'source': ''}, 'Super admin access is required')

This is basically the 403: Access Forbidden error. But with the “Super admin access is required” message it becomes far clearer. However, we need to ensure that the status code returned when this error message is shown stays 403 and isn’t modified unintentionally in the future.

Here, errors and exceptions work a little differently. When we declare a custom error class, we don’t really raise that error. Instead, we send that error as a response. So we can’t use the assertRaises() function. However, what we can do is compare the status code and ensure that the error created is the same as the expected one. So we do this:

def test_errors(self):
        with app.test_request_context():
            # Forbidden Error
            forbidden_error = ForbiddenError({'source': ''}, 'Super admin access is required')
            self.assertEqual(forbidden_error.status, 403)

            # Not Found Error
            not_found_error = NotFoundError({'source': ''}, 'Object not found.')
            self.assertEqual(not_found_error.status, 404)


Here we first create an object of the error class ForbiddenError with a sample source and message. We then assert, using the assertEqual() function, that the status attribute of this object is 403, which ensures that this error is of the Access Forbidden type, as expected.
This helps us make sure that no one in the future unknowingly or by mistake changes the error messages and status codes, so that the HTTP status codes in the responses stay correct.
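For context, the error helper classes essentially just carry a status code along with a JSON API style source and detail. A rough sketch of such a class hierarchy (an assumption for illustration, not the project's exact code):

class ErrorResponse(Exception):
    """Base error helper; subclasses only override the status code."""
    status = 500

    def __init__(self, source, detail):
        self.source = source
        self.detail = detail


class ForbiddenError(ErrorResponse):
    status = 403


class NotFoundError(ErrorResponse):
    status = 404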


Resources:

Implementing JSON API for ‘settings/contact-info’ route in Open Event Frontend

In Open Event Frontend, under the settings route, there is a ‘contact-info’ route which allows users to change their info (email and contact). Previously, we were using a mock response from the server to achieve this. But since the JSON API is now available, we can integrate it to let the user modify their email and contact info. In the following section, I will explain how it is built.

The first thing to do is to create the model for the user, so as to mirror the database schema and include our required fields in it. The user model looks like this:

export default ModelBase.extend({
  email        : attr('string'),
  password     : attr('string'),
  isVerified   : attr('boolean', { readOnly: true }),
  isSuperAdmin : attr('boolean', { readOnly: true }),
  isAdmin      : attr('boolean', { readOnly: true }),

  firstName : attr('string'),
  lastName  : attr('string'),
  details   : attr('string'),
  contact   : attr('string'),
});

Above is the user’s model; only the relevant fields are included here (there are more fields in the user model). The code shows that email and contact are two attributes which accept ‘string’ values. We have a contact-info form located at the ‘settings/contact-info’ route. It has two input fields as follows:

<form class="ui form {{if isLoading 'loading'}}" {{action 'submit' on='submit'}} novalidate>
  <div class="field">
    <label>{{t 'Email'}}</label>
    {{input type='email' name='email' value=data.email}}
  </div>
  <div class="field">
    <label>{{t 'Phone'}}</label>
    {{input type='text' name='phone' value=data.contact}}
  </div>
  <button class="ui teal button" type="submit">{{t 'Save'}}</button>
</form>

The form has a submit button which triggers the submit action. We forward the submit action from the component to the controller so as to maintain ‘data down, actions up’ (that code is straightforward, hence not shown here). Following is the action used to update the user, which we handle in the contact-info.js controller:

updateContactInfo() {
      this.set('isLoading', true);
      let currentUser = this.get('model');
      currentUser.save({
        adapterOptions: {
          updateMode: 'contact-info'
        }
      })
        .then(user => {
          this.set('isLoading', false);
          let userData = user.serialize(false).data.attributes;
          userData.id = user.get('id');
          this.get('authManager').set('currentUserModel', user);
          this.get('session').set('data.currentUserFallback', userData);
          this.get('notify').success('Updated information successfully');
        })
        .catch(() => {
        });
    }

We return the current user’s model from the route’s model hook and store it in the ‘currentUser’ variable. Since the inputs in our contact-info form are data-bound, the values are picked up automatically once the form is submitted. Thus, we can call the ‘save’ method on the ‘currentUser’ model and pass an object called ‘adapterOptions’ which has the key ‘updateMode’. We send this key to the ‘user’ serializer so that it picks only the attributes to be updated and omits the others. We have customized our user serializer as:

if (snapshot.adapterOptions && snapshot.adapterOptions.updateMode === 'contact-info') {
  json.data.attributes = pick(json.data.attributes, ['email', 'contact']);
}
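For clarity, the snippet above lives inside the serializer's serialize hook. A rough sketch of how the user serializer could be structured (the base class and the lodash import are assumptions for illustration):

// serializers/user.js -- sketch only; the project's actual base serializer may differ
import { pick } from 'lodash';
import ApplicationSerializer from './application';

export default ApplicationSerializer.extend({
  serialize(snapshot, options) {
    const json = this._super(snapshot, options);
    // Only send email and contact when saving from the contact-info form
    if (snapshot.adapterOptions && snapshot.adapterOptions.updateMode === 'contact-info') {
      json.data.attributes = pick(json.data.attributes, ['email', 'contact']);
    }
    return json;
  }
});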

The ‘save’ method on ‘currentUser’ (i.e. ‘currentUser.save()’) returns a promise. When the promise resolves, we set ‘currentUserModel’ to the updated ‘user’ returned by it. Thus, we are able to update the email and contact info using the JSON API.

Resources:

Ember data official guide

Blog on models and Ember data by Embedly

Checking Image Size to Avoid Crash in Android Apps

In the Giggity app a user can create a shortcut for an event by clicking on the “home shortcut” button in the navigation drawer. The Open Event format provides the logo URL in the returned data, so we do not need to provide it separately in the app’s raw file.

Sometimes the image can be too big to be used on screen as a shortcut icon. In this blog I describe a very simple method to check whether we should use the image or not, to avoid crashes and pixelation due to its resolution.

We can store the received image in bitmap format. A bitmap is a type of memory organization or image file format used to store digital images. The term comes from computer programming terminology, meaning just a map of bits, a spatially mapped array of bits. By storing the image as a bitmap we can easily get the information we need to check whether it is suitable for use.

We can use the BitmapFactory class, which provides several decoding methods (decodeByteArray(), decodeFile(), decodeResource(), etc.) for creating a Bitmap from various sources. Choose the most appropriate decode method based on your image data source. These methods attempt to allocate memory for the constructed bitmap and can therefore easily result in an OutOfMemory exception. Each type of decode method has additional signatures that let you specify decoding options via the BitmapFactory.Options class. Setting the inJustDecodeBounds property to true while decoding avoids memory allocation, returning null for the bitmap object but setting outWidth, outHeight and outMimeType. This technique allows you to read the dimensions and type of the image data prior to construction (and memory allocation) of the bitmap.

BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;
BitmapFactory.decodeResource(getResources(), R.id.myimage, options);
int imageHeight = options.outHeight;
int imageWidth = options.outWidth;
String imageType = options.outMimeType;

To avoid java.lang.OutOfMemory exceptions, check the dimensions of a bitmap before decoding it, unless you absolutely trust the source to provide you with predictably sized image data that comfortably fits within the available memory.

Here is the particular example from the Giggity app; it avoids a crash on receiving a large image for the icon. Once we decode the image into a bitmap, we check whether the height and width of the icon exceed the maximum limit.

public Bitmap getIconBitmap() {
    InputStream stream = getIconStream();
    Bitmap ret = null;

    if (stream != null) {
        ret = BitmapFactory.decodeStream(stream);
        if (ret == null) {
            Log.w("getIconBitmap", "Discarding unparseable file");
            return null;
        }
        if (ret.getHeight() > 512 || ret.getHeight() != ret.getWidth()) {
            Log.w("getIconBitmap", "Discarding, icon not square or >512 pixels");
            return null;
        }
        if (!ret.hasAlpha()) {
            Log.w("getIconBitmap", "Discarding, no alpha layer");
            return null;
        }
    }

    return ret;
}

If it does, we discard the icon. In this case we check whether the icon is more than 512 pixels high or is not square; if so, we discard it.

We also check whether the icon has an alpha (transparency) layer using “hasAlpha”, so that the icons displayed on the screen look uniform. In the final result you can see the icon of the TUBIX 2017 conference added to the home screen, as it meets all the defined criteria.

Now that the image dimensions are known, they can be used to decide if the full image should be loaded into memory or if a subsampled version should be loaded instead. Here are some factors to consider:

  • Estimated memory usage of loading the full image in memory.
  • Amount of memory you are willing to commit to loading this image given any other memory requirements of your application.
  • Dimensions of the target ImageView or UI component that the image is to be loaded into.
  • Screen size and density of the current device.

For example, it’s not worth loading a 1024×768 pixel image into memory if it will eventually be displayed in a 128×96 pixel thumbnail in an ImageView.
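If you do want to load a scaled-down version instead of discarding the image, the usual pattern is to compute an inSampleSize from the dimensions read above and decode again with it. A sketch of that standard helper (this is the common pattern from the Android documentation, not code from Giggity):

public static int calculateInSampleSize(BitmapFactory.Options options, int reqWidth, int reqHeight) {
    // Raw dimensions read with inJustDecodeBounds = true
    final int height = options.outHeight;
    final int width = options.outWidth;
    int inSampleSize = 1;

    if (height > reqHeight || width > reqWidth) {
        final int halfHeight = height / 2;
        final int halfWidth = width / 2;
        // Increase inSampleSize in powers of two while the subsampled image
        // is still larger than the requested dimensions.
        while ((halfHeight / inSampleSize) >= reqHeight && (halfWidth / inSampleSize) >= reqWidth) {
            inSampleSize *= 2;
        }
    }
    return inSampleSize;
}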

 

References:

Upgrading the Style and Aesthetic of an Android App using Material Design

As I add Open Event format support to apps, I often encounter ones that don’t follow current design guidelines. Earlier, styling an app was a tough task, as the color and behaviour of each view needed to be defined separately. But now, with more advanced styling methods, we can style an app easily.

I recently worked on upgrading the user interface of Giraffe app after adding our Open Event support. See the repository to view the code for more reference. Here I follow the same procedure to upgrade the user interface.

First we add the essential libraries for the material aesthetic. The AppCompat library provides backward compatibility.

//Essential Google libraries
compile 'com.android.support:appcompat-v7:25.3.1'
compile 'com.android.support:design:25.3.1'

Then we define an XML file in the values folder for the style of the app, which we get through the AppCompat library. We can inherit the same style in the entire app or a separate style for a particular activity.

<resources>

   <!-- Base application theme. -->
   <style name="AppTheme" parent="Theme.AppCompat.Light.NoActionBar">
       <!-- Customize your theme here. -->
       <item name="colorPrimary">@color/colorPrimary</item>
       <item name="colorPrimaryDark">@color/colorPrimaryDark</item>
       <item name="colorAccent">@color/colorAccent</item>
   </style>


   <style name="AlertDialogCustom" parent="Theme.AppCompat.Light.Dialog.Alert">
       <item name="colorPrimary">@color/colorPrimary</item>
       <item name="colorAccent">@color/colorAccent</item>
   </style>

</resources>

Now the views follow the same color scheme and behaviour throughout the app according to current design guidelines, without any particular manipulation of each of them.

Tip: Don’t define values of colors separately for different views. Define them in colors.xml to use them everywhere. It then becomes easier to change them in the future if needed.

The app now uses the Action Bar for frequently used operations, unlike the custom layout that was used earlier.

This is how the Action Bar is implemented.

First, declare the action bar in the XML layout.

Tip: Define the color of the bar two shades lighter than the status bar.

<android.support.design.widget.AppBarLayout
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:background="@android:color/transparent"
    android:theme="@style/ThemeOverlay.AppCompat.Dark.ActionBar">

    <android.support.v7.widget.Toolbar
        xmlns:app="http://schemas.android.com/apk/res-auto"
        android:id="@+id/toolbar_options"
        android:layout_width="match_parent"
        android:layout_height="?attr/actionBarSize"
        android:background="@color/colorPrimary"
        app:popupTheme="@style/ThemeOverlay.AppCompat.Dark">

        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="@string/options"
            android:textColor="@color/colorAccent"
            android:textSize="20sp" />
    </android.support.v7.widget.Toolbar>

</android.support.design.widget.AppBarLayout>

Then you can use the action bar in the activity; use the onCreateOptionsMenu() method to inflate options into the toolbar (a sketch of this is shown after the menu definition below).

@Override
    public void onCreate(Bundle savedInstanceState) {
        ...

        setTitle("");
        title = (TextView) findViewById(R.id.titlebar);
        Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar_main);
        setSupportActionBar(toolbar);

        ...
    }

The menu that needs to be inflated will look like this, with two buttons at the right end of the action bar for bookmarks and filter respectively:

<menu xmlns:android="http://schemas.android.com/apk/res/android"
      xmlns:app="http://schemas.android.com/apk/res-auto">
    <item
        android:id="@+id/action_bookmark"
        android:icon="@drawable/ic_bookmark"
        android:menuCategory="secondary"
        android:title="Bookmark"
        app:showAsAction="ifRoom" />

    <item
        android:id="@+id/action_filter"
        android:icon="@drawable/ic_filter"
        android:menuCategory="secondary"
        android:title="Filter"
        app:showAsAction="ifRoom" />
</menu>
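A minimal sketch of inflating this menu and reacting to the two items (the menu resource name and the handler bodies are assumptions for illustration):

@Override
public boolean onCreateOptionsMenu(Menu menu) {
    // Inflate the bookmark and filter items defined in res/menu/menu_main.xml
    getMenuInflater().inflate(R.menu.menu_main, menu);
    return true;
}

@Override
public boolean onOptionsItemSelected(MenuItem item) {
    switch (item.getItemId()) {
        case R.id.action_bookmark:
            // show bookmarked sessions
            return true;
        case R.id.action_filter:
            // open the filter options
            return true;
        default:
            return super.onOptionsItemSelected(item);
    }
}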

To adapt the declared style further, alert dialogs are also modified to match the app’s theme; their style is defined along with the app’s style, as shown below:

AlertDialog.Builder noFeedBuilder = new AlertDialog.Builder(context, R.style.AlertDialogCustom);
noFeedBuilder.setMessage(R.string.main_no_feed_text)
        .setTitle(R.string.main_no_feed_title)
        .setPositiveButton(R.string.common_yes, new DialogInterface.OnClickListener() {
            ...
        });
noFeedBuilder.show();

Here is an example of the improvement, before and after updating the user interface and aesthetic of the app with the easy steps defined above.

   

See this for all the changes made to step up the user interface of the app.

References:

 

How RSS Action Type is Implemented in SUSI Android

Important skills of SUSI.AI include displaying web search results, showing a map of any location and providing a list of relevant information on a topic. The RSS action type is similar to the websearch action type: when the web search is to be performed on the client side it is denoted by the websearch action type, and when the web search is performed by the server itself it is denoted by the rss action type. In this blog, I will show you how the rss action type is implemented in SUSI Android.

In case of the RSS action type, the server searches the internet using RSS feeds and returns an array of objects containing:

  • Title
  • Description
  • Link
{
  "title": "dog-doh: Definitions Index",
  "description": "dog-doh: Definitions Index. dog dog and pony show dog biscuit dog collar dog days …",
  "link": "http://websters.yourdictionary.com/index/dog-doh/"
}

title: Title related to the user query.

description: Description of the user query.

link: If the user wants more information, they can follow this link.

How rss action type is parsed in SUSI Android

SUSI replies in JSON format, which must be parsed properly to show it in the Android app. We use the Retrofit library, developed by Square, to parse the JSON data. Retrofit parses data according to model classes, which we write based on the expected JSON reply. For example, each SUSI response contains an ‘answers’ JSON array, and inside it there are two further JSON arrays, ‘data’ and ‘action’. We made a different model class for each JSON array.

The first model class is SusiResponse. We use this model class to parse the ‘answers’ JSON array:

@SerializedName("answers")
private List<Answer> answers = new ArrayList<>();

Here we use a List<> because the ‘answers’ JSON array contains a list of JSON arrays and JSON objects. Answer is the second model class. We use it to parse two important JSON arrays, ‘data’ and ‘action’. The ‘action’ attribute holds information about the action type.

public class Answer {
    @SerializedName("data")
    private RealmList<Datum> data = new RealmList<>();
}

Here also we use a list because the ‘data’ JSON array contains a list of JSON objects, but instead of a simple List we use RealmList<> because after parsing we save the data using Realm. The ‘data’ JSON array contains multiple JSON objects, and each object contains three important pieces of information: ‘title’, ‘description’ and ‘link’.

The Datum class is the main model class used to save and retrieve ‘title’, ‘description’ and ‘link’. The setTitle, setDescription and setLink methods of the Datum class are used to save them, and the getTitle, getDescription and getLink methods are used to retrieve them.
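A minimal sketch of the Datum model (field names assumed from the getters and setters described above; it extends RealmObject so that Realm can persist it via bgRealm.createObject(Datum.class)):

public class Datum extends RealmObject {

    @SerializedName("title")
    private String title;

    @SerializedName("description")
    private String description;

    @SerializedName("link")
    private String link;

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }

    public String getDescription() { return description; }
    public void setDescription(String description) { this.description = description; }

    public String getLink() { return link; }
    public void setLink(String link) { this.link = link; }
}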

How rss action type data is retrieved and saved

As already mentioned, we use Retrofit to retrieve data and Realm to save it. susiResponse is the response we received from the SUSI server. We use susiResponse to retrieve a list of Datum objects:

List<Datum> datumList = susiResponse.getAnswers().get(0).getData();

We then loop through datumList, and from each element we extract ‘title’, ‘description’ and ‘link’ using the getTitle(), getDescription() and getLink() methods respectively. The Datum class serves as the model class for both Retrofit and Realm. realmDatum is an instance of the Datum class and datumRealmList is a RealmList of Datum objects. After extracting the data we save it using setTitle(), setDescription() and setLink():

for (Datum datum : datumList) {
    Datum realmDatum = bgRealm.createObject(Datum.class);
    realmDatum.setDescription(datum.getDescription());
    realmDatum.setLink(datum.getLink());
    realmDatum.setTitle(datum.getTitle());
    datumRealmList.add(realmDatum);
}

Layout design to show rss action type reply

There are three TextViews, with ids ‘title’, ‘description’ and ‘link’, to show the ‘title’, ‘description’ and ‘link’ retrieved from SUSI’s reply. We use a RecyclerView to show the list of results:

Datum datum = datumList.get(position);

holder.titleTextView.setText(Html.fromHtml(datum.getTitle()));

holder.descriptionTextView.setText(Html.fromHtml(datum.getDescription()));

holder.linkTextView.setText(datum.getLink());

Resources

Importing the Open Event format in Giraffe

Giraffe is a personal conference schedule tool for Android. Most conferences, bar camps and similar events offer their plan of sessions and talks in the iCal format for importing into your calendar. However, importing a whole session plan into your standard calendar renders it pretty much useless for anything else. Giraffe allows users to import the schedule into a separate list, giving you a simple overview of what happens at the conference. Besides the session title, date and time, it also lists the speaker, location and description if available in the iCal URL. Sessions can be bookmarked and the list can be filtered by favourites and upcoming talks.

Recently I added support for the Open Event JSON format along with iCal. In this blog I describe the simple steps you need to follow to see an event created on the Open Event server in the Giraffe app. The initial steps are similar to the Giggity app:

1. Go to your event dashboard.

2. Click on the export button.

3. Select sessions from the dashboard and copy the URL.

 

4. Click on the “Giraffe” button on the toolbar and paste the link in the box that follows. The app will ask you to paste it the first time you open it. Here the app loads the data and checks the first few characters to see which kind of data has been received (a rough sketch of this check is shown below). Find my other blog post about solving that problem here.

The app uses separate data models for iCal and JSON to store the information received, and then saves it in an SQL database for CRUD operations. See the database activity here.
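A rough sketch of the first-character format check mentioned in step 4 (the idea only, not Giraffe's exact code):

// Decide whether the fetched schedule looks like Open Event JSON or iCal.
private static boolean looksLikeJson(String body) {
    String trimmed = body.trim();
    // Open Event JSON starts with '{' or '[', while iCal starts with "BEGIN:VCALENDAR"
    return trimmed.startsWith("{") || trimmed.startsWith("[");
}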

   

5. Now you can see the sessions. Click on them to see more information or bookmark them if needed. The data is loaded from the database even when the app is offline, so we don’t need to worry about the connection once the data has been loaded.

   

Resources:

 

Open Event Server: Testing Image Resize Using PIL and Unittest

FOSSASIA‘s Open Event Server project uses a certain set of functions to resize an image from its original size, for example to a thumbnail, icon or large image. How do we test these image resizing functions in the Open Event Server project? To test the resizing functionality, we need to verify that the resized image dimensions are the same as the dimensions provided for the resize. For example, in this function, we provide the URL for the image that we received, and it creates and saves a resized version:

def create_save_resized_image(image_file, basewidth, maintain_aspect, height_size, upload_path,
                              ext='jpg', remove_after_upload=False, resize=True):
    """
    Create and Save the resized version of the background image
    :param resize:
    :param upload_path:
    :param ext:
    :param remove_after_upload:
    :param height_size:
    :param maintain_aspect:
    :param basewidth:
    :param image_file:
    :return:
    """
    filename = '{filename}.{ext}'.format(filename=get_file_name(), ext=ext)
    image_file = cStringIO.StringIO(urllib.urlopen(image_file).read())
    im = Image.open(image_file)

    # Convert to jpeg for lower file size.
    if im.format != 'JPEG':
        img = im.convert('RGB')
    else:
        img = im

    if resize:
        if maintain_aspect:
            width_percent = (basewidth / float(img.size[0]))
            height_size = int((float(img.size[1]) * float(width_percent)))

        img = img.resize((basewidth, height_size), PIL.Image.ANTIALIAS)

    temp_file_relative_path = 'static/media/temp/' + generate_hash(str(image_file)) + get_file_name() + '.jpg'
    temp_file_path = app.config['BASE_DIR'] + '/' + temp_file_relative_path
    dir_path = temp_file_path.rsplit('/', 1)[0]

    # create dirs if not present
    if not os.path.isdir(dir_path):
        os.makedirs(dir_path)

    img.save(temp_file_path)
    upfile = UploadedFile(file_path=temp_file_path, filename=filename)

    if remove_after_upload:
        os.remove(image_file)

    uploaded_url = upload(upfile, upload_path)
    os.remove(temp_file_path)

    return uploaded_url


In this function, we send the image URL, the width and height to resize to, and the aspect ratio flag (True or False), along with the folder the image should be saved in. For this blog, we assume the aspect ratio is False, which means that we don’t maintain the aspect ratio while resizing. So, given the above parameters, we get the URL of the resized image that was saved.

To test whether it has been resized to the correct dimensions, we use Pillow, or as it is popularly known, PIL. We write a separate function named getsizes() which takes the image file as a parameter. Using the Image module of PIL, we open the file as a JpegImageFile object. The JpegImageFile object has an attribute size which returns (width, height). So from this function, we return the size attribute. Following is the code:

def getsizes(self, file):
    # get the image dimensions as (width, height)
    im = Image.open(file)
    return im.size


Now that we have this function, it’s time to look at the unit test. In the unit test, we set a dummy width and height that we want to resize to, and set the aspect ratio to False as discussed above. This helps us test that both width and height are properly resized. We are using a Creative Commons licensed image for resizing. This is the code:

def test_create_save_resized_image(self):
        with app.test_request_context():
            image_url_test = 'https://cdn.pixabay.com/photo/2014/09/08/17/08/hot-air-balloons-439331_960_720.jpg'
            width = 500
            height = 200
            aspect_ratio = False
            upload_path = 'test'
            resized_image_url = create_save_resized_image(image_url_test, width, aspect_ratio, height, upload_path, ext='png')
            resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]
            resized_width, resized_height = self.getsizes(resized_image_file)


In the above code, from create_save_resized_image we receive the URL of the resized image. Since we have written all the unit tests for local settings, we get a URL with localhost as the server. However, we don’t have the server running, so we can’t access the image through the URL. Instead, we build the absolute path to the image file from the URL and store it in resized_image_file. Then we find the size of the image using the getsizes function that we have already written. This gives us the width and height of the newly resized image. We then assert that the width we wanted to resize to is equal to the actual width of the resized image, and we make the same check for the height as well. If both match, the resizing function worked perfectly. Here is the complete code:

def test_create_save_resized_image(self):
        with app.test_request_context():
            image_url_test = 'https://cdn.pixabay.com/photo/2014/09/08/17/08/hot-air-balloons-439331_960_720.jpg'
            width = 500
            height = 200
            aspect_ratio = False
            upload_path = 'test'
            resized_image_url = create_save_resized_image(image_url_test, width, aspect_ratio, height, upload_path, ext='png')
            resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]
            resized_width, resized_height = self.getsizes(resized_image_file)
            self.assertTrue(os.path.exists(resized_image_file))
            self.assertEqual(resized_width, width)
            self.assertEqual(resized_height, height)


In the Open Event Orga Server, we use this resize function to create 3 resized images in various modules, such as events, users, etc. The 3 sizes are named Large, Thumbnail and Icon. Depending on which one is more suitable, we use it, avoiding the need to load a very big image for a very small div. The exact width and height for these 3 sizes can be changed from the admin settings of the project. We use the same technique as mentioned above and check the sizes for each of them. Here is the code:

def test_create_save_image_sizes(self):
        with app.test_request_context():
            image_url_test = 'https://cdn.pixabay.com/photo/2014/09/08/17/08/hot-air-balloons-439331_960_720.jpg'
            image_sizes_type = "event"
            width_large = 1300
            width_thumbnail = 500
            width_icon = 75
            image_sizes = create_save_image_sizes(image_url_test, image_sizes_type)

            resized_image_url = image_sizes['original_image_url']
            resized_image_url_large = image_sizes['large_image_url']
            resized_image_url_thumbnail = image_sizes['thumbnail_image_url']
            resized_image_url_icon = image_sizes['icon_image_url']

            resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]
            resized_image_file_large = app.config.get('BASE_DIR') + resized_image_url_large.split('/localhost')[1]
            resized_image_file_thumbnail = app.config.get('BASE_DIR') + resized_image_url_thumbnail.split('/localhost')[1]
            resized_image_file_icon = app.config.get('BASE_DIR') + resized_image_url_icon.split('/localhost')[1]

            resized_width_large, _ = self.getsizes(resized_image_file_large)
            resized_width_thumbnail, _ = self.getsizes(resized_image_file_thumbnail)
            resized_width_icon, _ = self.getsizes(resized_image_file_icon)

            self.assertTrue(os.path.exists(resized_image_file))
            self.assertEqual(resized_width_large, width_large)
            self.assertEqual(resized_width_thumbnail, width_thumbnail)
            self.assertEqual(resized_width_icon, width_icon)

Resources:

Know the Usage of all Emojis on Twitter with loklak Emoji Heatmap

The Loklak apps page now has a new app in its store, Emoji Heatmap. This app can be used to see the usage of all the emojis in tweets all over the world in the form of a heatmap. The major difference between the Emoji Heatmap and Emoji Heatmapper apps is that Heatmapper shows the tweets related to a specific search query, whereas this app displays all the tweets which contain emojis.

How the App Fetches and Stores the Locations

The Emoji Heatmap uses the loklak search API. The search API needs a query in order to search and output the JSON data, but this app takes no input from the user. To make the search dynamic, we use an existing JSON file from the emojiHeatmapper app and the loklak-fetcher-client JavaScript file. From the emoji.json file, we collect each query item and search for it using loklak-fetcher-client. From the JSON output we get back from loklak-fetcher-client, we extract the location parameter. The location parameter is then stored as a “feature” of the OpenLayers 3 map.

So, here in the Emoji Heatmap app, we iterate over emoji.json and get a different search query each time, which we then search for using the loklak search API.

Code which adds the retrieved location as a feature:

  $.getJSON("../emojiHeatmapper/emoji.json", function(json) {
    for (var i = 0; i < json.data.length; i++){
      var query = json.data[i][1];
      // Fetch loklak API data, and fill the vector
      loklakFetcher.getTweets(query, function(tweets) {
        for(var i = 0; i < tweets.statuses.length; i++) {
          if(tweets.statuses[i].location_point !== undefined){
          // Creation of the point with the tweet's coordinates
          //  Coords system swap is required: OpenLayers uses by default
          //  EPSG:3857, while loklak's output is EPSG:4326
            var point = new ol.geom.Point(ol.proj.transform(tweets.statuses[i].location_point, 'EPSG:4326', 'EPSG:3857'));
            vector.addFeature(new ol.Feature({  // Add the point to the data vector
              geometry: point,
              weight: 20
            }));
          }
        }
      });
    }
  });

 

The above function has two variables, query and point. The query variable stores the data retrieved from the emoji.json file on each iteration, and that query is sent to loklak-fetcher-client. The point variable holds the location returned by the loklak search API, converted into the coordinate system used by OpenLayers 3. The point is then added as a feature to the map vector. The map vector is where all the features are stored; it is then rendered on the map as a heatmap.
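Rendering the collected features as a heatmap is a small layer setup in OpenLayers 3. A minimal sketch (assuming vector is the ol.source.Vector used in the snippet above and map is the ol.Map instance):

// Render the collected tweet locations as a heatmap layer
var heatmapLayer = new ol.layer.Heatmap({
  source: vector,   // the vector source the features were added to
  blur: 15,         // how much each point is blurred into its neighbours
  radius: 10        // pixel radius of each point
});

map.addLayer(heatmapLayer);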

Resources