Using Travis CI to Generate Sample Apks for Testing in Open Event Android

In the Open Event Android app we were already using Travis to push an apk of the Android app to the apk branch for easy testing after each commit to the repo. A better way to test the dynamic nature of the app is to use the samples of different events from the Open Event repo and generate an apk for each sample. This helps us easily identify bugs and inconsistencies in the generator and the Android app. In this blog I will talk about how this was implemented using Travis CI.

What is a CI?

Continuous Integration is a DevOps software development practice in which developers regularly push their code changes to a central repository. After each push or merge, automated builds and tests are run on the new code. This helps developers identify bugs quite easily. There are many CI services available, such as Travis CI, CircleCI and Jenkins.

Now that we are all caught up, let’s dive into the code.

Script for replacing a line in a file (configedit.sh)

The main role of this script is to replace a line in the config.json file. Why do we need this? It is used to reconfigure the Api_Link in config.json according to our build parameters. If we want the apk for Mozilla All Hands 2017 to be built, we use this script to replace the Api_Link in config.json with the one for Mozilla All Hands 2017.

This is what the config.json file for the app looks like.

{
   "Email": "dev@fossasia.org",
   "App_Name": "Open Event",
   "Api_Link": "https://eventyay.com/api/v1/events/6/"
 }

We are going to replace line 4 of this file with

"Api_Link": "https://raw.githubusercontent.com/fossasia/open-event/master/sample/MozillaAllHands17"

#!/bin/bash
# Print config.json with line 4 (the Api_Link entry) replaced by the string passed as $1
VAR=0
STRING1=$1
while read line
do
    ((VAR+=1))
    if [ "$VAR" = 4 ]; then
        echo "$STRING1"
    else
        echo "$line"
    fi
done < app/src/main/assets/config.json

The script above reads the file line by line. When it reaches line 4, it prints the string that was passed to the script as a parameter; otherwise it simply prints the original line.

When called, this script only prints the required file to the terminal; it does NOT create the file. So we redirect its output back into config.json (via a temporary file, since redirecting directly into the file the script is still reading from would truncate it first).
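A possible invocation looks like the following (an illustrative sketch; the exact call in the build script may differ):

./configedit.sh '"Api_Link": "https://raw.githubusercontent.com/fossasia/open-event/master/sample/MozillaAllHands17"' > config.tmp
mv config.tmp app/src/main/assets/config.json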

Now let’s move on to the main script which is responsible for the building of the apks.

Build Script (generate_apks.sh)

Main components of the script

  • Build the apk for the default sample, i.e. FOSSASIA17, using the Gradle tasks ./gradlew build and ./gradlew assembleRelease.
  • Store all the Api_Links and apk names for which we need apks in different arrays.
  • Replace the Api_Link in the JSON file found under android/app/src/main/assets/config.json using configedit.sh.
  • Run ./gradlew build and ./gradlew assembleRelease to generate the apk.
  • Move the generated apk from app/build/outputs/apk/ to a folder called daily where we store all the generated apks.
  • Repeat this process for the other Api_Links in the array (a simplified sketch of this loop is given below).
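The loop at the heart of generate_apks.sh looks roughly like this (a simplified sketch; the array contents, sample URLs and output paths are illustrative, not the exact script):

API_LINKS=("https://raw.githubusercontent.com/fossasia/open-event/master/sample/MozillaAllHands17"
           "https://raw.githubusercontent.com/fossasia/open-event/master/sample/GoogleIO17")
APK_NAMES=("mozilla-all-hands-17" "google-io-17")

mkdir -p daily
for i in "${!API_LINKS[@]}"; do
    # rewrite line 4 of config.json with this sample's Api_Link
    ./configedit.sh "\"Api_Link\": \"${API_LINKS[$i]}\"" > config.tmp
    mv config.tmp app/src/main/assets/config.json

    # build and collect the apk (assuming a single release apk is produced)
    ./gradlew build && ./gradlew assembleRelease
    mv app/build/outputs/apk/*.apk "daily/${APK_NAMES[$i]}.apk"
done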

As of now we are generating the apks for the following events:

  1. FOSSASIA 17
  2. Mozilla All Hands 17
  3. Google I/O 17
  4. Facebook Developer Conference 17

Care is also taken to skip all these builds if the commit comes from a PR. All the apks are generated only when there is a commit on the development branch, i.e. when the PR is merged.

Usage of Scripts in .travis.yml

To add Travis integration for the repo we need to include a file named .travis.yml in the repo, which tells Travis CI what to build.

language: android
....
jdk: oraclejdk8
....
before_script:
  - cd android
script:
  - chmod +x generate_apks.sh
  - chmod +x configedit.sh
  - ./generate_apks.sh
....
after_success:
  - bash <(curl -s https://codecov.io/bash)
  - cd ..
  - chmod +x upload-apk.sh
  - ./upload-apk.sh

In this file we need to define the language for which Travis will build. Here we indicate that it is android. We also specify the jdk version to be used.

Now let’s talk about the other parts of this snippet.

  • before_script : Executes the bash instructions before the Travis build starts. Here we do cd android so that we can access gradlew for building the apk.
  • script : This section consists of the instructions to be executed for the build. Here we give executable rights to the two scripts we have written, generate_apks.sh and configedit.sh. Then ./generate_apks.sh is called and the project build starts. All the apks get saved to the folder daily.
  • after_success : This section consists of the instructions that run after the script executes successfully. Here we run a script called upload-apk.sh, which is responsible for pushing the generated apk files to an orphan branch called apk.

Some points of Interest

  • If the user/developer testing the apk starts in the offline state and then comes online, there will be database inconsistencies, as data from the local assets as well as data from the Api_Link would appear in the app.
  • When the app generator CLI is ready we can use it to trigger the builds instead of just replacing the Api_Link. This would also be effective in testing the app generator simultaneously.

Now we have everything setup to trigger builds for various samples after each commit.

Resources


Hiding Intelligence of Susper When a Query is Empty or Erased with Angular

Recently, we implemented an intelligence feature in Susper using the SUSI chat API to give users the answer to a question without making them go deeper into the search results. When a user types “height of Trump”, it shows an answer like this:

The problem we faced after implementing the feature:

When a user erased the query or the query field was empty, Susper was still showing the answer from the intelligence component, like this:

The answer should not be displayed when the query is empty, because the user is not asking any question. It was still displayed because the component had already received a response from the SUSI API.

How did we solve the problem?

The problem was solved in two ways.

  1. By using an if/else condition: we check inside the component whether the query is empty, and if that condition is true, the component is hidden.
  2. Using the [hidden] attribute method: Angular supports the [hidden] attribute, which acts as { display: none; }. [hidden] works much like the ngShow and ngHide directives that were available in AngularJS.

We combined both methods to solve the problem. The intelligence component is loaded inside the results component using the <app-intelligence> element. Further, we added the [hidden] attribute to this element like this:

<app-intelligence [hidden]="hideIntelligence"></app-intelligence>
We created hideIntelligence as a boolean variable. To check whether the query is empty, the searchdata variable was used.
searchdata: any = {
  query: '',
  rows: 10,
  start: 0
};
And then we checked whether the query is empty using an if-else condition:
// checks if the query is empty or erased
if (this.searchdata.query === '') {
  // display: none; is true
  this.hideIntelligence = true;
} else {
  // display: none; is false
  this.hideIntelligence = false;
}

 

Applying this solution, we succeeded in hiding the intelligence component. We could also have used the *ngIf directive, but we preferred [hidden]: [hidden] only modifies the display property, whereas *ngIf is a structural directive that creates or destroys the content in the DOM.
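For comparison, the *ngIf alternative (a sketch, not the approach used in Susper) would look like this:

<!-- removes the component from the DOM entirely instead of just hiding it -->
<app-intelligence *ngIf="!hideIntelligence"></app-intelligence>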

The source code for the implementation can be found here: https://github.com/fossasia/susper.com/pull/613

Resources:

 


Adding Manual ISO Controls in Phimpme Android

The Phimpme Android application comes with a well-featured camera to take high resolution photographs. It features an auto mode as well as a manual mode for users who like to customise the camera experience according to their own liking. The manual mode lets users select from the range of ISO values supported by their device, which helps enhance images in circumstances where the auto mode fails, such as low lighting conditions.

In this tutorial, I will be discussing how we achieved this in Phimpme Android with some code snippets and screenshots.

To provide users with an option to select from the range of ISO values, the first thing we need to do is scan the phone for all the supported ISO values and store them in an ArrayList to display later on. This can be done with the snippet provided below:

String iso_values = parameters.get("iso-values");
if( iso_values == null ) {
    iso_values = parameters.get("iso-mode-values"); // Galaxy Nexus
    if( iso_values == null ) {
        iso_values = parameters.get("iso-speed-values"); // Micromax A101
        if( iso_values == null )
            iso_values = parameters.get("nv-picture-iso-values"); // LG dual P990
    }
}

Every device supports a different set of keywords for providing the list of ISO values. Hence, we have tried to add every possible keyword to extract the values. The keywords used above cover almost 90% of Android devices and fetch the set of ISO values successfully.

For devices which support ISO values but don’t provide a keyword to extract them, we can provide a standard list of ISO values manually using the code snippet below:

values.add("200");
values.add("400");
values.add("800");
values.add("1600");

After extracting the set of ISO values, we need to build a list to display to the user, from which a particular ISO value can be selected, as depicted in the Phimpme camera screenshot below.
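The value returned by the camera parameters is typically a single comma-separated string, so turning it into a list for the UI can be sketched roughly as follows (illustrative, not the exact Phimpme code):

List<String> values = new ArrayList<>();
if( iso_values != null ) {
    // e.g. "auto,100,200,400,800,1600" -> ["auto", "100", "200", ...]
    for( String value : iso_values.split(",") ) {
        values.add(value.trim());
    }
}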

Now, to set the selected ISO value, we first need to get the ISO key with which the camera parameter can be set, as depicted in the code snippet below:

if( parameters.get(iso_key) == null ) {
    iso_key = "iso-speed"; // Micromax A101
    if( parameters.get(iso_key) == null ) {
        iso_key = "nv-picture-iso"; // LG dual P990
        if( parameters.get(iso_key) == null ) {
            if( Build.MODEL.contains("Z00") )
                iso_key = "iso"; // Asus Zenfone 2 Z00A and Z008
        }
    }
}

Getting the key to set the ISO value is similar to getting the key to extract the ISO values from the device. The ISO keys listed above cover most devices.

Now that we have the ISO key, we need to change the camera parameter to reflect the selected value.

parameters.set(iso_key, supported_values.selected_value);
setCameraParameters(parameters);

To get the full source code on how to set the ISO values manually, please refer to the Phimpme Android repository.

Resources

  1. Stackoverflow – Keywords to extract ISO values from the device: http://stackoverflow.com/questions/2978095/android-camera-api-iso-setting
  2. Open camera Android source code: https://sourceforge.net/p/opencamera/code/ci/master/tree/
  3. Blog – Learn more about ISO values in photography: https://photographylife.com/what-is-iso-in-photography

Adding Event Overview Route in Open Event Frontend

In Open Event Frontend we have an event overview route which is like a mini dashboard for an event, where information regarding event sponsors, general info, roles, tickets, event setup etc. is present. All of this information lives in corresponding components, and the dashboard is made up of those components. To create this dashboard we will first create its components.

To create a component we will use the following Ember command:

ember g component <component-name>

This command will give us three files: a template, a component and a test file corresponding to that component. We will use this command to generate all our components.

Now let’s discuss each component separately and see how they are combined to form this route.

The event-setup-checklist component uses Semantic UI’s steps to maintain a checklist of basic details, sponsors, sessions & microlocations, call for speakers, and session and speaker form customization, so that it becomes easy to identify which step is complete and which is not.

Next is the general-info component, which shows basic information about an event, like start time, end time, location, number of speakers, number of sponsors etc. It also shows whether the event is live or not.

In the manage-roles component we manage the role for a given person: add people, assign different roles to them, and edit roles for different people. We can also see who has been invited for a given role and who has accepted.

In the event-sponsors component we manage the sponsors for the event: edit an existing sponsor, add a new sponsor with their logo, name, type and level, and delete an existing sponsor.

Next is the tickets component, which displays the number of orders, the number of tickets sold, and the total sales. It also displays how many tickets of each type were sold.

Next is our app component, which has two choices: first, to generate an Android app for the event, and second, to generate a web app of the event.

And finally, in our view.index template, we add these components using Semantic UI’s stackable grid layout. Whenever we want to conditionally show or hide a component, we can do that in the same template, and hence it becomes very easy to manage a huge amount of content on a single page.
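A stripped-down sketch of what that template can look like (the component names come from the components described above, but the exact attributes and conditions are illustrative):

<div class="ui stackable grid">
  <div class="eight wide column">
    {{general-info event=model}}
  </div>
  <div class="eight wide column">
    {{#if model.sponsorsEnabled}}
      {{event-sponsors sponsors=model.sponsors}}
    {{/if}}
  </div>
</div>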

Resources


File Upload Validations on Open Event Frontend

In Open Event Frontend we have used Semantic UI’s form validation to validate different fields of a form. There are certain instances in our app where the user has to upload a file, and it must be validated against the suggested formats before being uploaded to the server. Here we will discuss how to perform this validation.

Semantic UI allows us to validate a field by passing an object along with rules for its validation. For fields like email and contact number we can pass the type as email or number respectively, but for validating a file we have to pass a regular expression with the allowed extensions. The following walks through the process.

fields : {
  file: {
    identifier : 'file',
    rules      : [
      {
        type   : 'empty',
        prompt : this.l10n.t('Please upload a file')
      },
      {
        type   : 'regExp',
        value  : '/^(.*.((zip|xml|ical|ics|xcal)$))?[^.]*$/i',
        prompt : this.l10n.t('Please upload a file in suggested format')
      }
    ]
  }
}

Here we have passed the file element (which is to be validated) inside our fields object. Its identifier, which for this field is 'file', can match the id, name or data-validate property of the field element. After that we pass an array of rules against which the field element is validated. The first rule gives an error message in the prompt field in case of an empty field.

The next rule checks the allowed file extensions for the file. The type of this rule is regExp, as we are passing a regular expression, which is as follows:

/^(.*.((zip|xml|ical|ics|xcal)$))?[^.]*$/i

It is a little complex to explain from the beginning, so let us break it down starting from the end:

 

  • $ : matches the end of the string
  • [^.]* : a negated set; matches any character that is not a dot, and * means 0 or more of the preceding token
  • ( … )? : an optional group; the ? means the group may or may not be present
  • .*.((zip|xml|ical|ics|xcal)$) : the first capturing group; it contains tokens combined to create the capture group (zip|xml|ical|ics|xcal), which matches the extension at the end of the string
  • ^ : matches the beginning of the string

The above regular expression accepts only files with zip/xml/ical/ics/xcal extensions, which are the allowed formats for the event source file.
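For context, such a fields object is attached to the form using Semantic UI’s form module roughly like this (a generic sketch, not the exact Open Event Frontend code):

$('.ui.form').form({
  fields: {
    file: { /* the identifier and rules shown above */ }
  },
  inline: true,
  onSuccess: function () {
    // all rules passed; proceed with the upload
  }
});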

References

  • Ivaylo Gerchev blog on form validation in semantic ui
  • Drmsite blog on semantic ui form validation
  • Semantic ui form validation docs
  • Stackoverflow regex for file extension

Adding JSONAPI Support in Open Event Android App

The Open Event API Server exposes a well-documented, JSONAPI-compliant REST API that can be used by the Open Event App Generator and the Frontend to access and manipulate data. So JSONAPI support is also needed in external services like the Open Event App Generator and the Frontend. In this post I explain how to add JSONAPI support in Android.

There are many client libraries that implement JSONAPI support in Android or Java, like moshi-jsonapi, morpheus etc. You can find the list here. The main problem is that most of these libraries require models to inherit from a Resource class, but in the Open Event Android app our models already extend RealmObject, and in Java a class can’t extend more than one class. So we will be using the jsonapi-converter library, which uses annotation processing to add JSONAPI support.

1. Add dependency

In order to use jsonapi-converter in your app, add the following dependency to your app module’s build.gradle file.

dependencies {
	compile 'com.github.jasminb:jsonapi-converter:0.7'
}

2.  Write model class

Models will be used to represent requests and responses. To support JSONAPI we need to take care of the following when writing the models.

  • Each model class must be annotated with com.github.jasminb.jsonapi.annotations.Type annotation
  • Each class must contain a String attribute annotated with com.github.jasminb.jsonapi.annotations.Id annotation
  • All relationships must be annotated with com.github.jasminb.jsonapi.annotations.Relationship annotation

In Open Event Android we have many models, like event, session, track, microlocation, speaker etc. Here I am only defining the Track model because of its simplicity.

@Type("track")
public class Track extends RealmObject {

        	@Id(IntegerIdHandler.class)
        	private int id;
        	private String name;
        	private String description;
        	private String color;
        	private String fontColor;
        	@Relationship("sessions")
        	private RealmList<Session> sessions;

        	//getters and setters
}

Jsonapi-converter uses Jackson for data parsing. To know how to use Jackson for parsing follow my previous blog.

The Type annotation is used to instruct the serialization/deserialization library on how to process the given model class. Each resource must have an id attribute; the Id annotation is used to flag an attribute of a class as the id. In the above class the id attribute is an int, so we need to specify the IntegerIdHandler class (a ResourceIdHandler) in the annotation. The Relationship annotation is used to designate other resource types as a relationship. The value in the Relationship annotation should match the JSONAPI specification of the server. In the Open Event project each track has sessions, so we add a Relationship annotation for them.

3.  Setup API service and retrofit

After defining models, define API service interface as you would usually do with standard JSON APIs.

public interface OpenEventAPI {
    @GET("tracks?include=sessions&fields[session]=title")
    Call<List<Track>> getTracks();
}

Now create an ObjectMapper & a retrofit object and initialize them.

ObjectMapper objectMapper = OpenEventApp.getObjectMapper();
Class[] classes = {Track.class, Session.class};

OpenEventAPI openEventAPI = new Retrofit.Builder()
                    .client(okHttpClient)
                    .baseUrl(Urls.BASE_URL)
                    .addConverterFactory(new JSONAPIConverterFactory(objectMapper, classes))
                    .build()
                    .create(OpenEventAPI.class);

 

The classes array contains all the model classes which will be supported by this Retrofit builder and API service. The main task here is to add a JSONAPIConverterFactory, which will be used to serialize and deserialize data according to the JSONAPI specification. The JSONAPIConverterFactory constructor takes two parameters: an ObjectMapper and the list of classes.

4.  Use API service  

Now after setting up all the things according to above steps, you can use the openEventAPI instance to fetch data from the server.

openEventAPI.getTracks();
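Since getTracks() returns a Retrofit Call, it still has to be executed; a minimal sketch (assumed, not taken verbatim from the app) of consuming it asynchronously with retrofit2’s Callback looks like this:

openEventAPI.getTracks().enqueue(new Callback<List<Track>>() {
    @Override
    public void onResponse(Call<List<Track>> call, Response<List<Track>> response) {
        if (response.isSuccessful()) {
            List<Track> tracks = response.body(); // deserialized by JSONAPIConverterFactory
            // update the database/UI with the fetched tracks
        }
    }

    @Override
    public void onFailure(Call<List<Track>> call, Throwable t) {
        // handle network or conversion errors
    }
});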

Conclusion

JSON API is designed to minimize both the number of requests and the amount of data transmitted between clients and servers.


Implementing Intelligence Feature in Susper

Susper gives answers to your questions using SUSI AI. We want to give users the best experience while they are searching for answers to their questions. To achieve this, we have incorporated features like the infobox and intelligence using SUSI.

Google has a similar feature, where users can ask questions like ‘Who is the president of the USA?’ and get the answer directly, without having to dig into the search results.

Similarly, Susper gives the answer to the user:

It also answers questions related to real-time data, such as the temperature.

 

How did we implement this feature?

We used the API Endpoint of SUSI at http://api.asksusi.com/

Using the SUSI API is as simple as sending the query as a URL parameter in a GET request: http://api.susi.ai/susi/chat.json?q=YOUR_QUERY

You can also get various action types in the response. E.g., an answer-type response for http://api.susi.ai/susi/chat.json?q=hey%20susi is:

actions: [
  {
    type: "answer",
    expression: "Hi, I'm Susi"
  }
],

 

Documentation regarding SUSI is available here.

Implementation in Susper:

We have created an Intelligence component to display answer related to a question. You can check it here: https://github.com/fossasia/susper.com/tree/master/src/app/intelligence

It takes care of rendering and styling the data received from the SUSI API.

The intelligence.component.ts makes a call to the intelligence service with the required query, and the service makes a GET request to the SUSI API and retrieves the results.

Intelligence.component.ts

this.intelligence.getintelligentresponse(data.query).subscribe(res => {
  if (res && res.answers && res.answers[0].actions) {
    this.actions = res.answers[0].actions;
    for (let action of this.actions) {
      if (action.type === 'answer' && action.mood !== 'sabta') {
        this.answer = action.expression;
      } else {
        this.answer = '';
      }
    }
  } else {
    this.answer = '';
  }
});

 

Intelligence.service.ts

export class IntelligenceService {
  server = 'http://api.susi.ai';
  searchURL = this.server + '/susi/chat.json';
  constructor(private http: Http, private jsonp: Jsonp, private store: Store<fromRoot.State>) {
  }
  getintelligentresponse(searchquery) {
    let params = new URLSearchParams();
    params.set('q', searchquery);
    params.set('callback', 'JSONP_CALLBACK');
    return this.jsonp
      .get('http://api.asksusi.com/susi/chat.json', {search: params}).map(res =>
        res.json()
      );
  }
}

Whenever getintelligentresponse of the intelligence service is called, it creates a URLSearchParams() object, sets the required parameters on it and sends them in the jsonp.get request. We also set the callback parameter to 'JSONP_CALLBACK' to tell the API to send us the data as JSONP.

Thereby, the intelligence component retrieves the answer and displays it along with the search results on Susper.

The source code for this implementation can be found in this pull request:

https://github.com/fossasia/susper.com/pull/569

Resources:


Registering Organizations’ Repositories for Continuous Integration with Yaydoc

Among various features implemented in Yaydoc was the introduction of a modal in the Web Interface used for Continuous Deployment. The modal was used to register user’s repositories to Yaydoc. All the registered repositories then had their documentation updated continuously at each commit made to the repository. This functionality is achieved using Github Webhooks.

The implementation performed the continuous deployment successfully. However, there was a limitation: only the public repositories owned by a user could be registered. Repositories owned by organisations, which the user either owned or had admin access to, couldn’t be registered to Yaydoc.

In order to implement this enhancement, a select tag was added which contains all the organizations the user has authorized Yaydoc to access. These organizations are retrieved from GitHub’s Organization API using the user’s access token.

/**
 * Retrieve a list of organization the user has access to
 * @param accessToken: Access Token of the user
 * @param callback: Returning the list of organizations
 */
exports.retrieveOrgs = function (accessToken, callback) {
  request({
    url: 'https://api.github.com/user/orgs',
    headers: {
      'User-Agent': 'request',
      'Authorization': 'token ' + accessToken
    }
  }, function (error, response, body) {
    var organizations = [];
    var bodyJSON = JSON.parse(body);
    bodyJSON.forEach(function (organization) {
      organizations.push(organization.login);
    });
    return callback(organizations);
  });
};

On selecting a particular organization from the select tag, the list of repositories is updated. The user then inputs a query in a search input which, on submitting, shows a list of repositories matching the keyword. An AJAX GET request is sent to GitHub’s Search API in order to retrieve all the repositories matching the keyword.

$(function () {
  ....
$.get(`https://api.github.com/search/repositories?q=user:${username}+fork:true+${searchBarInput.val()}`, function (result) {
    ....
    result.items.forEach(function (repository) {
      options += '<option>' + repository.full_name + '</option>';
    });
    ....
  });
  ....
});

The selected repository is then submitted to the backend, where the repository is registered in Yaydoc’s database and a webhook is set up to Yaydoc’s CI, as was already happening with the user’s own repositories. After a successful registration, every commit to the user’s or organization’s repository sends a webhook, on receiving which Yaydoc performs the documentation generation and deployment process.

Resources:

  1. Github’s Organization API: https://developer.github.com/v3/orgs/
  2. Github’s Search API: https://developer.github.com/v3/search/
  3. Simplified HTTP Request Client: https://github.com/request/request

Shifting from Java to Kotlin in SUSI Android

SUSI Android (https://github.com/fossasia/susi_android) is written in Java. After Google announced official support for Kotlin as a first-class language for Android development, we decided to shift to Kotlin as it is more robust and concise than Java.

Advantages of Kotlin over Java

  1. Kotlin is a null-safe language. Types are non-nullable by default, so the compiler ensures that the developer doesn’t run into unexpected NullPointerExceptions.
  2. Kotlin provides a way to declare extension functions, similar to C#. We can call these functions in the same way as the member functions of our class.
  3. Kotlin also provides support for lambda functions and other higher-order functions (a short sketch of these features is given below).
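A small illustrative sketch of these three features (not taken from the SUSI Android code base):

// 1. Null safety: nullable types must be handled explicitly
fun messageLength(message: String?): Int = message?.length ?: 0

// 2. Extension function: add an isQuestion() "member" to String without subclassing it
fun String.isQuestion(): Boolean = trim().endsWith("?")

// 3. Lambdas and higher-order functions
fun main() {
    val queries = listOf("hi susi", "what is the weather today?")
    println(queries.filter { it.isQuestion() })  // [what is the weather today?]
    println(messageLength(null))                 // 0
}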

For more details refer to this link.

After seeing the above points it is now clear that Kotlin is much more effective than Java and there is no harm in switching the code from Java to Kotlin. Let’s now see the implementation in SUSI Android.

Implementation in Susi Android

In the SUSI Android app we are implementing the MVP design with Kotlin, converting the code from Java to Kotlin one activity at a time. The advantage here is that Kotlin is fully interoperable with Java at any point, allowing the developer to change the code bit by bit instead of all at once. Let’s now look at the SignUpActivity implementation in SUSI Android.

The ISignUpView interface contains all the functions related to the view.

interface ISignUpView {

    fun alertSuccess()
    fun alertFailure()
    fun alertError(message: String)
    fun setErrorEmail()
    fun setErrorPass()
    fun setErrorConpass(msg: String)
    fun setErrorUrl()
    fun enableSignUp(bool: Boolean)
    fun clearField()
    fun showProgress()
    fun hideProgress()
    fun passwordInvalid()
    fun emptyEmailError()
    fun emptyPasswordError()
    fun emptyConPassError()
}

The SignUpActivity implements the view interface in the following way. The view is responsible for all of the user’s interaction with the UI elements of the app; it does not contain any business logic related to the app.

class SignUpActivity : AppCompatActivity(), ISignUpView {

    var signUpPresenter: ISignUpPresenter? = null
    var progressDialog: ProgressDialog? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_sign_up)
        addListeners()
        setupPasswordWatcher()

        progressDialog = ProgressDialog(this@SignUpActivity)
        progressDialog?.setCancelable(false)
        progressDialog?.setMessage(this.getString(R.string.signing_up))

        signUpPresenter = SignUpPresenter()
        signUpPresenter?.onAttach(this)
    }

    fun addListeners() {
        showURL()
        hideURL()
        signUp()
    }

    override fun onOptionsItemSelected(item: MenuItem): Boolean {
        if (item.itemId == android.R.id.home) {
            finish()
            return true
        }
        return super.onOptionsItemSelected(item)
    }

Now we will see the implementation of models in Susi Android in Kotlin and compare it with Java.

Let’s first see the implementation in Java, followed by the equivalent Kotlin model.

public class WebSearchModel extends RealmObject {

    private String url;
    private String headline;
    private String body;
    private String imageURL;

    public WebSearchModel() {
    }

    public WebSearchModel(String url, String headline, String body, String imageUrl) {
        this.url = url;
        this.headline = headline;
        this.body = body;
        this.imageURL = imageUrl;
    }

    public void setUrl(String url) {
        this.url = url;
    }

    public void setHeadline(String headline) {
        this.headline = headline;
    }

    public void setBody(String body) {
        this.body = body;
    }

    public void setImageURL(String imageURL) {
        this.imageURL = imageURL;
    }

    public String getUrl() {
        return url;
    }

    public String getHeadline() {
        return headline;
    }

    public String getBody() {
        return body;
    }

    public String getImageURL() {
        return imageURL;
    }
}
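And here is the same model rewritten in Kotlin: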
open class WebSearchModel : RealmObject {

    var url: String? = null
    var headline: String? = null
    var body: String? = null
    var imageURL: String? = null

    constructor() {}

    constructor(url: String, headline: String, body: String, imageUrl: String) {
        this.url = url
        this.headline = headline
        this.body = body
        this.imageURL = imageUrl
    }
}

You can see the difference for yourself and how easily, with the help of Kotlin, we can reduce the code drastically.

For diving more into the code, we can refer to the GitHub repo of Susi Android (https://github.com/fossasia/susi_android).

Resources


Implementing Toolbar(ActionBar) in SUSI Android

SUSI is an artificial intelligence for interactive chat bots. The SUSI Android app (https://github.com/fossasia/susi_android) has a toolbar that offers different user actions right from the menu, such as search, login, logout and rating the app. The Material Design toolbar provides a clean and efficient way to expose these functions. We will see how we implemented this in the SUSI Android app.

To implement the ActionBar in the Android app, we start by adding the dependency below to the app module’s build.gradle file.

compile 'com.android.support:appcompat-v7:22.2.0'

For more information regarding setting up ActionBar in your own project please refer to this link.

Now we will add the Toolbar in our XML file as shown below.

<android.support.v7.widget.Toolbar
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    style="@style/Theme.AppCompat"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:background="?attr/colorPrimary"
    android:minHeight="@dimen/abc_action_bar_default_height_material">

    <ImageView
        android:id="@+id/toolbar_img"
        android:layout_width="@dimen/toolbar_logo_width"
        android:layout_height="@dimen/toolbar_logo_height"
        android:layout_gravity="center"
        app:srcCompat="@drawable/ic_susi_white" />

</android.support.v7.widget.Toolbar>

On adding the above code we can see a preview of the Toolbar, which appears like this.

 

Now we will see how to implement the menu options. In the menu_main.xml we can add options to be shown in the toolbar.

<menu xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    tools:context=".activities.MainActivity">

    <item
        android:id="@+id/action_search"
        android:icon="@android:drawable/ic_menu_search"
        android:orderInCategory="200"
        android:title="@string/search"
        app:actionViewClass="android.support.v7.widget.SearchView"
        app:showAsAction="always" />

    <item
        android:id="@+id/down_angle"
        android:icon="@drawable/ic_arrow_down_24dp"
        android:title="Search Down"
        android:visible="false"
        app:showAsAction="always" />

    <item
        android:id="@+id/up_angle"
        android:icon="@drawable/ic_arrow_up_24dp"
        android:title="Search Up"
        android:visible="false"
        app:showAsAction="always" />

    <item
        android:id="@+id/action_settings"
        android:orderInCategory="100"
        android:title="@string/action_settings"
        app:showAsAction="never" />

    <item
        android:id="@+id/wall_settings"
        android:orderInCategory="100"
        android:title="@string/action_wall_settings"
        app:showAsAction="never" />

    <item
        android:id="@+id/action_share"
        android:orderInCategory="101"
        android:title="@string/action_share"
        app:showAsAction="never" />

We can define different items in the ActionBar as shown above. We can also control the visibility of each item with the android:visible attribute. Items with app:showAsAction set to "never" are not shown as icons in the toolbar; they appear in the overflow menu like this.

Now we will see how to add functionality to the options present in the menu. To inflate the menu and attach listeners to the options, we override onCreateOptionsMenu and add the logic for the different options there, as shown below.

@Override
public boolean onCreateOptionsMenu(final Menu menu) {
    this.menu = menu;
    getMenuInflater().inflate(R.menu.menu_main, menu);
    if (PrefManager.getBoolean(Constant.ANONYMOUS_LOGGED_IN, false)) {
        menu.findItem(R.id.action_logout).setVisible(false);
        menu.findItem(R.id.action_login).setVisible(true);
    } else if (!(PrefManager.getBoolean(Constant.ANONYMOUS_LOGGED_IN, false))) {
        menu.findItem(R.id.action_logout).setVisible(true);
        menu.findItem(R.id.action_login).setVisible(false);
    }

    searchView = (SearchView) MenuItemCompat.getActionView(menu.findItem(R.id.action_search));
    final EditText editText = (EditText) searchView.findViewById(android.support.v7.appcompat.R.id.search_src_text);
    searchView.setOnSearchClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View view) {
            toolbarImg.setVisibility(View.GONE);
            ChatMessage.setVisibility(View.GONE);
            btnSpeak.setVisibility(View.GONE);
            sendMessageLayout.setVisibility(View.GONE);
            isEnabled = false;
        }
    });
    return true;
}
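Clicks on the individual menu items are then handled in onOptionsItemSelected; a minimal sketch (simplified and assumed, not the exact SUSI Android code) looks like this:

@Override
public boolean onOptionsItemSelected(MenuItem item) {
    switch (item.getItemId()) {
        case R.id.action_settings:
            // open the settings screen
            return true;
        case R.id.action_share:
            // launch a share intent for the app
            return true;
        default:
            return super.onOptionsItemSelected(item);
    }
}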

For diving more into the code, we can refer to the GitHub repo of Susi Android (https://github.com/fossasia/susi_android).

Resources
