Displaying Avatar Image of Users using Gravatar on SUSI.AI

This blog discusses how a user's avatar is shown at different places in the SUSI.AI UI, such as the app bar and the feedback comments, using the Gravatar service. A Gravatar is a Globally Recognized Avatar: an image that follows you from site to site, appearing beside your name when you do things like comment or post on a blog. The Gravatar service has been integrated into SUSI.AI so that users can also be identified by their avatars.

Going through the implementation

  • The aim is to get an avatar of the user from the email id. For that purpose, Gravatar exposes a publicly available avatar of the user, which can be accessed via the following steps:
    • Creating the hash of the email
    • Sending the image request
  • For creating the MD5 hash of the email, the npm library md5 is used. The function takes a string as input and returns the hash of that string.
  • Now, a URL is generated using this hash.
  • The URL format is https://www.gravatar.com/avatar/HASH, where ‘HASH’ is the hash of the user’s email. If no avatar is associated with the hash, Gravatar returns a default avatar image.
  • Also, ‘.jpg’ is appended to the URL to keep the image format consistent across the website. When the generated URL is used in an <img> tag, the browser requests it like any other image and the avatar is returned.
  • The avatar is displayed at various places in the UI, such as the app bar and the feedback comments section. The implementation in the feedback section is discussed below.
  • The CircleImage component is used for displaying the avatar; it takes name as a required property and src as the link to the image, if available. The following function returns the props for the CircleImage component.

import md5 from 'md5';
import { urls } from './';

// urls.GRAVATAR_URL = 'https://www.gravatar.com/avatar';

let getAvatarProps = emailId => {
  const emailHash = md5(emailId);
  const GRAVATAR_IMAGE_URL = `${urls.GRAVATAR_URL}/${emailHash}.jpg`;
  const avatarProps = {
    name: emailId.toUpperCase(),
    src: GRAVATAR_IMAGE_URL,
  };
  return avatarProps;
};

export default getAvatarProps;

 

  • Then the returned props are passed to the CircleImage component, which is set as the leftAvatar property of the feedback comments ListItem. Following is the snippet –

….
<ListItem
  key={index}
  leftAvatar={<CircleImage {...avatarProps} size="40" />}
  primaryText={
    <div>
      <div>{`${data.email.slice(
        0,
        data.email.indexOf('@') + 1,
      )}...`}</div>
      <div className="feedback-timestamp">
        {this.formatDate(parseDate(data.timestamp))}
      </div>
    </div>
  }
  secondaryText={<p>{data.feedback}</p>}
/>
….
.
.

 

  • This displays the avatar of the user in the UI. The UI changes are shown below:

References

Continue ReadingDisplaying Avatar Image of Users using Gravatar on SUSI.AI

Overriding the Basic File Attributes while Skill Creation/Editing on Server

In this blog post, we are going to understand the method for overriding basic file attributes while Skill creation/editing on SUSI Server. The need for this arose because the creationTime of the Skill file stored on the server was getting changed whenever the skill was edited.

Need for the implementation

As briefly explained above, the creationTime of the Skill file stored on the server gets changed when the skill is edited. The lastModifiedTime also needed to be overridden so that the metrics-based skill listings give correct results. Currently, we have two metrics for the SUSI Skills – Recently Updated Skills and Newest Skills. The former is determined by the lastModifiedTime and the latter by the creationTime. Due to inconsistencies in these attributes, the skills that were shown were out of order. The lastModifiedTime was overridden to save the epoch date during skill creation, so that newly created skills don't show up in the Recently Updated Skills section, whereas the creationTime was overridden to retain the correct creation time.

Going through the implementation

Let us first have a look at how the creationTime was overridden in the ModifySkillService.java file.

.
BasicFileAttributes attr = null;
Path p = Paths.get(skill.getPath());
try {
    attr = Files.readAttributes(p, BasicFileAttributes.class);
} catch (IOException e) {
    e.printStackTrace();
}
FileTime skillCreationTime = null;
if( attr != null ) {
    skillCreationTime = attr.creationTime();
}

if (model_name.equals(modified_model_name) &&
    group_name.equals(modified_group_name) &&
    language_name.equals(modified_language_name) &&
    skill_name.equals(modified_skill_name)) {
    // Writing to File
    try (FileWriter file = new FileWriter(skill)) {
        file.write(content);
        json.put("message", "Skill updated");
        json.put("accepted", true);

    } catch (IOException e) {
        e.printStackTrace();
        json.put("message", "error: " + e.getMessage());
    }
    // Keep the creation time same as previous
    if(attr!=null) {
        try {
            Files.setAttribute(p, "creationTime", skillCreationTime);
        } catch (IOException e) {
            System.err.println("Cannot persist the creation time. " + e);
        }
    }
}
.
.
.

 

  • Firstly, we get the BasicFileAttributes of the Skill file and store it in the attr variable.
  • Next, we initialise the variable skillCreationTime of type FileTime to null and then set it to the existing creationTime of the file.
  • The new Skill file is saved on the path using the FileWriter instance, which changes the creationTime and lastModifiedTime to the time of editing of the skill.
  • The above behaviour is not desired, and hence we override the creationTime with the FileTime saved in skillCreationTime. This ensures that the creation time of the skill persists even after the skill is edited.
  • Now we are going to see how the lastModifiedTime was overridden in the CreateSkillService.java file.

.
Path newPath = Paths.get(path);
// Override modified date to an older date so that the recently updated metrics works fine
// Set is to the epoch time
try {
  Files.setAttribute(newPath, "lastModifiedTime", FileTime.fromMillis(0));
} catch (IOException e) {
  System.err.println("Cannot override the modified time. " + e);
}
.
.
.

 

  • For this, we get the newPath of the Skill file and then explicitly set the lastModifiedTime of the Skill file to a particular time.
  • We set it to FileTime.fromMillis(0), i.e., the epoch time.

I hope that I was able to convey my learnings and the implementation of the code properly, and that it proves helpful for your understanding.

Resources

Documentation for BasicFileAttributes Interface – https://docs.oracle.com/javase/8/docs/api/java/nio/file/attribute/BasicFileAttributes.html

Continue ReadingOverriding the Basic File Attributes while Skill Creation/Editing on Server

Change Role of User in SUSI.AI Admin section

In this blog post, we are going to implement the functionality to change the role of a user from the Admin section of the Skills CMS web app. The SUSI Server has multiple user roles with different access levels and functions. We will see how to facilitate the change in roles.

The UI interacts with the back-end server via the following API –

  • Endpoint URL – https://api.susi.ai/aaa/changeRoles.json
  • The minimal user role for hitting the API is ADMIN
  • It takes the following parameters –
    • user – The email of the user.
    • role – The new role of the user. It can take only selected values that are accepted by the server and whose roles have been defined by the server. They are – USER, REVIEWER, OPERATOR, ADMIN, SUPERADMIN.
    • access_token – The access token of the user who is making the request

Implementation on the CMS Admin

  • Firstly, a dialog box containing a drop-down was added in the Admin section, which contains a list of the possible user roles. The dialog box is shown when the Edit button, present in each row of the User table, is clicked.
  • The UI of the dialog box is as follows –

  • The implementation of the UI is done as follows –

….
<Dialog
  title="Change User Role"
  actions={actions} // Contains 2 buttons for Change and Cancel
  modal={true}
  open={this.state.showEditDialog}
>
  <div>
    Select new User Role for
    <span style={{ fontWeight: 'bold', marginLeft: '5px' }}>
      {this.state.userEmail}
    </span>
  </div>
  <div>
    <DropDownMenu
      selectedMenuItemStyle={blueThemeColor}
      onChange={this.handleUserRoleChange}
      value={this.state.userRole}
      autoWidth={false}
    >
      <MenuItem
        primaryText="USER"
        value="user"
        className="setting-item"
      />
      {/*
        Similarly add MenuItems for REVIEWER, OPERATOR, ADMIN, SUPERADMIN
      */}
    </DropDownMenu>
  </div>
</Dialog>
….
.
.
.

 

  • In the above UI implementation, the Material-UI components, namely Dialog, DropDownMenu, MenuItem and FlatButton, are used.
  • When the drop-down value is changed, the handleUserRoleChange function is executed. The function changes the value of the state variable, and its definition is as follows –
handleUserRoleChange = (event, index, value) => {
  this.setState({
      userRole: value,
  });
};

 

  • Once the correct user role has been selected, the onClick handlers for the action buttons come into the picture.
  • The handler for the Cancel button simply closes the dialog box, whereas the handler for the Change button makes an API call that changes the user role on the server.
  • The click handlers for both buttons are as follows –

// Handler for 'Change' button
onChange = () => {
  let url = `${urls.API_URL}/aaa/changeRoles.json?user=${this.state.userEmail}&role=${this.state.userRole}&access_token=${cookies.get('loggedIn')}`;
  let self = this;
  $.ajax({
    url: url,
    dataType: 'jsonp',
    crossDomain: true,
    timeout: 3000,
    async: false,
    success: function(response) {
      self.setState({ changeRoleDialog: true });
    },
    error: function(errorThrown) {
      console.log(errorThrown);
    },
  });
  this.handleClose();
};

// Handler for 'Cancel' button
handleClose = () => {
  this.setState({
    showEditDialog: false,
  });
};
  • In the first function above, the URL endpoint is hit, and on success the success dialog is shown and the previous dialog is hidden.
  • In the second function above, only the dialog box is hidden.
  • The crossDomain flag in the AJAX call is set to true to enable API usage from multiple domain names, and setting the dataType to jsonp deals with the same issue.

Resources

Continue ReadingChange Role of User in SUSI.AI Admin section

Removing Google Places from FDroid Flavor in Orga App

In the Open Event Orga App, one of the libraries used was Google Places from Google Play Services. According to the FDroid inclusion policy, proprietary software such as Google Play Services cannot be included in the project, and hence it needs to be removed or an alternative needs to be found. The following steps were taken to remove the Places API and make sure that it is used only in the playStore version of the app.

Steps

  • Initially, we change the implementation directive in the build.gradle file to playStoreImplementation to make sure that the Places API is used only in the playStore version.
playStoreImplementation "com.google.android.gms:play-services-places:${versions.play_services}"
  • A class LocationPicker.java is created in both the fdroid and playStore directories. In the fdroid directory of the project we need to make sure that there is no real implementation for the method launchPicker. Following is the code for this class. There is a method named getPlace which takes the following parameters: the Activity and the Intent. It returns a Location object in which we pass demo values for latitude and longitude and a null value for the address string.
public class LocationPicker {

  private final double DEMO_VALUE = 1;

  public boolean launchPicker(Activity activity) {
      //do nothing
      return false;
  }

  @SuppressLint("RestrictedApi")
  public Location getPlace(Activity activity, Intent data) {
      return new Location(DEMO_VALUE, DEMO_VALUE, null);
  }

  public boolean shouldShowLocationLayout() {
      return true;
  }
}
  • Now we make the following class, Location, which will receive parameters from the fdroid as well as the playStore version. We include this class in the Create package of events so that it can be shared by the LocationPicker classes of both fdroid and playStore. The Location class takes in 3 parameters, i.e. latitude, longitude and address. It is a normal POJO class.
public class Location {

  private double latitude;
  private double longitude;
  private CharSequence address;

  public Location(double latitude, double longitude, CharSequence address) {
      this.latitude = latitude;
      this.longitude = longitude;
      this.address = address;
  }

  public double getLatitude() {
      return latitude;
  }

  public double getLongitude() {
      return longitude;
  }

  public CharSequence getAddress() {
      return address;
  }

}
  • Now we need to implement the LocationPicker.java class in the playStore directory, so we implement the Google Places API in this particular class. Following is the implementation of the launchPicker method. We create an instance of GoogleApiAvailability and pass the activity context through it. If the Places API is present and the connection is successful, a new intent is launched from where the place is selected.

We include the intent statement in the try block and catch the exceptions in the catch block.

 

public boolean launchPicker(Activity activity) {
  int errorCode = googleApiAvailabilityInstance.isGooglePlayServicesAvailable(activity);

  if (errorCode == ConnectionResult.SUCCESS) {
      //SUCCESS
      PlacePicker.IntentBuilder builder = new PlacePicker.IntentBuilder();
      try {
          activity.startActivityForResult(builder.build(activity), PLACE_PICKER_REQUEST);
          return true;
      } catch (GooglePlayServicesRepairableException e) {
          Timber.d(e, "GooglePlayServicesRepairable");
      } catch (GooglePlayServicesNotAvailableException e) {
          Timber.d("GooglePlayServices NotAvailable => Updating or Unauthentic");
      }
  }
  return false;
}
  • Finally, we modify the code in the EventCreateDetails class as well so that it uses this flavor-specific LocationPicker; a rough sketch of how that might look is shown below.
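The post does not show the actual EventCreateDetails changes, so the following is only a minimal sketch of how that class might call the flavor-specific LocationPicker. The field names, result handling and helper methods here are illustrative assumptions; only launchPicker, getPlace and shouldShowLocationLayout come from the classes above.

// Hypothetical sketch, not the project's actual code
private final LocationPicker locationPicker = new LocationPicker();

private void onPickLocationClicked() {
    // On the fdroid flavor launchPicker() does nothing and returns false,
    // so we fall back to a manual location input
    if (!locationPicker.launchPicker(this)) {
        showManualLocationInput();
    }
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (resultCode == RESULT_OK) {
        // On the playStore flavor this returns the place chosen in the Place Picker
        Location location = locationPicker.getPlace(this, data);
        bindLocationToEvent(location);
    }
}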

Resources:

  1. Official fdroid website https://f-droid.org/en/
  2. Places API Official documentation https://developers.google.com/places/web-service/autocomplete
Continue ReadingRemoving Google Places from FDroid Flavor in Orga App

Adding Support for Playing Youtube Videos in SUSI iOS App

SUSI supports many exciting features on the chat screen, from simple answer-type responses to complex map, RSS and table-type responses. A user can even ask SUSI for an image of anything, and SUSI responds with the image in the chat screen. What if SUSI could also play YouTube videos when we ask it to? Wouldn't that be exciting? Yes, SUSI can play YouTube videos too. All the SUSI clients (iOS, Android, and Web) support playing YouTube videos in chat.

Google provides a YouTube iFrame Player API that can be used to play videos inside the app itself, instead of passing an intent and playing the videos in the YouTube app. The iFrame API provides support for playing YouTube videos in iOS applications.

In this post, we will see how the YouTube video playing feature is implemented in SUSI iOS.

Getting response from server side –

When we ask SUSI to play any video, we get a YouTube Video ID in the video_play action type in response. SUSI iOS makes use of the Video ID to play the YouTube video. In the response below, you can see that we also get an answer action type, and in the expression of the answer action we get the title of the video.

actions: [
  {
    type: "answer",
    expression: "Playing Kygo - Firestone (Official Video) ft. Conrad Sewell"
  },
  {
    identifier: "9Sc-ir2UwGU",
    identifier_type: "youtube",
    type: "video_play"
  }
]

Integrating youtube player in the app –

We have a VideoPlayerView that handles all the iFrame API methods and player events with the help of the YTPlayer HTML file.

When SUSI responds with a video_play action, the first step is to register the YouTubePlayerCell and present the cell in the collectionView of the chat screen.

Registering the Cell –

The register(_:forCellWithReuseIdentifier:) method registers a class for use in creating new collection view cells.

collectionView?.register(YouTubePlayerCell.self, forCellWithReuseIdentifier: ControllerConstants.youtubePlayerCell)

 

Presenting the YouTubePlayerCell –

Here we present the cell in the chat screen using the cellForItemAt method of the collection view data source.

if message.actionType == ActionType.video_play.rawValue {
  if let cell = collectionView.dequeueReusableCell(withReuseIdentifier: ControllerConstants.youtubePlayerCell, for: indexPath) as? YouTubePlayerCell {
    cell.message = message
    cell.delegate = self
    return cell
  }
}

 

Setting size for cell –

The sizeForItemAt method is used to set the size of the cell.

if message.actionType == ActionType.video_play.rawValue {
  return CGSize(width: view.frame.width, height: 158)
}

In YouTubePlayerCell, we display the thumbnail of the YouTube video using a UIImageView. The following method is used to get the thumbnail of a particular video by using the Video ID –

  1. Getting thumbnail image from URL
  2. Setting image to imageView
func downloadThumbnail() {
  if let videoID = message?.videoData?.identifier {
    let thumbnailURLString = "https://img.youtube.com/vi/\(videoID)/default.jpg"
    let thumbnailURL = URL(string: thumbnailURLString)
    thumbnailView.kf.setImage(with: thumbnailURL, placeholder: ControllerConstants.Images.placeholder, options: nil, progressBlock: nil, completionHandler: nil)
  }
}

We add a play button in the center of the thumbnail view so that when the user taps the play button, we can present the player.

On tapping the Play button, we present the PlayerViewController, which holds all the player setup, with the overFullScreen type of modalPresentationStyle.

@objc func playVideo() {
  if let videoID = message?.videoData?.identifier {
    let playerVC = PlayerViewController(videoID: videoID)
    playerVC.modalPresentationStyle = .overFullScreen
    delegate?.loadNewScreen(controller: playerVC)
  }
}

The method above presents the YouTube player with the given Video ID. We use a YouTubePlayerDelegate method to autoplay the video.

func playerReady(_ videoPlayer: YouTubePlayerView) {
  videoPlayer.play()
}

The player can be dismissed by tapping on the light black background.
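The dismissal code is not shown in this post; the following is only a minimal sketch of how it could be wired inside PlayerViewController, assuming the dimmed background is a view owned by that controller (backgroundView is an assumed name, the UIKit calls themselves are standard):

// Hypothetical sketch: dismiss the player when the dimmed background is tapped
override func viewDidLoad() {
  super.viewDidLoad()
  let tapGesture = UITapGestureRecognizer(target: self, action: #selector(dismissPlayer))
  backgroundView.addGestureRecognizer(tapGesture)
}

@objc func dismissPlayer() {
  dismiss(animated: true, completion: nil)
}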

Final Output –

Resources –

  1. Youtube iOS Player API
  2. SUSI API Sample Response for Playing Video
  3. SUSI iOS Link
Continue ReadingAdding Support for Playing Youtube Videos in SUSI iOS App

Implementing Scheduler Actions on Open Event Frontend

After the functionality to display scheduled sessions was added to Open Event Frontend, the read-only implementation of the scheduler was complete. What remained in the scheduler were the write actions, i.e., the actual scheduling of sessions, which event organizers do by deciding their timings, durations and venues.

First of all, these actions required the editable flag to be true for the fullcalendar plugin. This allowed the displayed sessions to be dragged and dropped. Once this was enabled, the next task was to embed data in each of the unscheduled sessions so that when they get dropped on the fullcalendar space, they are recognized by the calendar, which can place them at the appropriate location. For this functionality, they had to be jQuery UI draggables and contain "event" data within them. This was accomplished by the following code:

this.$().draggable({
  zIndex         : 999,
  revert         : true,      // will cause the event to go back to its
  revertDuration : 0  //  original position after the drag
});

this.$().data('event', {
  title    : this.$().text().replace(/\s\s+/g, ' '), // use the element's text as the event title
  id       : this.$().attr('id'),
  serverId : this.get('session.id'),
  stick    : true, // maintain when user navigates (see docs on the renderEvent method)
  color    : this.get('session.track.color')
});

Here, “this” refers to each unscheduled session. Note that the session color is fetched via the corresponding session track. Once the unscheduled sessions contain enough relevant data and are of the right type (i.e, jQuery UI draggable type), they’re ready to be dropped on the fullcalendar space.

Now, when an unscheduled session is dropped on the fullcalendar space, fullcalendar’s eventReceive callback is triggered after its drop callback. In this callback, the code removes the session data from the unscheduled sessions’ list, so it disappears from there and gets stuck to the fullcalendar space. Then the code in the drop callback makes a PATCH request to Open Event Server with the relevant data, i.e, start and end times as well as microlocation. This updates the corresponding session on the server.

Similarly, another callback is generated when an event is resized, which means when its duration is changed. This again sends a corresponding session PATCH request to the server. Furthermore, the functionality to pop a scheduled event out of the calendar and add it back to the unscheduled sessions’ list is also implemented, just like in Eventyay version 1. For this, a cross button is implemented, which is embedded in each scheduled session. Clicking this pops the session out of the calendar and adds it back to the unscheduled sessions list. Again, a corresponding PATCH request is sent to the server.

After getting the response to such requests, a notification is displayed on the screen, informing the user whether the action was successful or not. The main PATCH functionality is in a separate function which the different callbacks call accordingly, increasing code reusability (a sketch of this wiring follows the function below):

updateSession(start, end, microlocationId, sessionId) {
    let payload = {
      data: {
        attributes: {
          'starts-at' : start ? start.toISOString() : null,
          'ends-at'   : end ? end.toISOString() : null
        },
        relationships: {
          microlocation: {
            data: {
              type : 'microlocation',
              id   : microlocationId
            }
          }
        },
        type : 'session',
        id   : sessionId
      }
    };

    let config = {
      skipDataTransform: true
    };
    return this.get('loader')
      .patch(`sessions/${sessionId}`, JSON.stringify(payload), config)
      .then(() => {
        this.get('notify').success('Changes have been made successfully');
      })
      .catch(reason => {
        this.set('error', reason);
        this.get('notify').error(`Error: ${reason}`);
      });
  },
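For context, here is a minimal sketch of how the callbacks described above might invoke updateSession. The callback names (drop, eventResize) and the event/resource fields come from fullcalendar; the surrounding wiring is illustrative and not the component's actual code:

// Illustrative sketch only, not the actual component code
let self = this;
this.$('.full-calendar').fullCalendar({
  editable: true,
  drop(date, jsEvent, ui, resourceId) {
    // 'this' is the dragged unscheduled-session element; its 'event' data was attached earlier
    let eventData = $(this).data('event');
    // Assume a default duration for a freshly dropped session
    self.updateSession(moment(date), moment(date).add(15, 'minutes'), resourceId, eventData.serverId);
  },
  eventResize(event) {
    // Only the duration changed, so PATCH the new start/end with the same microlocation
    self.updateSession(event.start, event.end, event.resourceId, event.serverId);
  }
});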

This completes the scheduler implementation on Open Event Frontend. Here is how it looks in action:

scheduler actions.gif

Resources

Continue ReadingImplementing Scheduler Actions on Open Event Frontend

Open Event Server – Export Speakers as PDF File

FOSSASIA's Open Event Server is the REST API backend for the event management platform, Open Event. Here, event organizers can create their events, add tickets and manage all aspects from the schedule to the speakers. Also, once an organizer makes an event public, others can view it and buy tickets if interested.

The organizer can see all the speakers in a very detailed view in the event management dashboard, along with the status of each speaker. The possible statuses are pending, accepted, and rejected. The organizer can also take actions such as editing the speakers.

If the organizer wants to download the list of all the speakers as a PDF file, he or she can do it very easily by simply clicking on the Export As PDF button in the top right-hand corner.

Let us see how this is done on the server.

Server side – generating the Speakers PDF file

Here we will be using the pisa package, which converts HTML to PDF. It is an html2pdf converter that uses the ReportLab Toolkit, html5lib and pyPdf. It supports HTML5 and CSS 2.1 (and some of CSS 3), and since it is written in pure Python it is platform independent.

from xhtml2pdf import pisa

We have a utility method create_save_pdf which creates and saves PDFs from HTML. It takes the following arguments:

  • pdf_data – This contains the HTML content which has to be converted to PDF.
  • key – This contains the file name
  • dir_path – This contains the directory

It returns the newly formed PDF file. The code is as follows:

def create_save_pdf(pdf_data, key, dir_path='/static/uploads/pdf/temp/'):
   filedir = current_app.config.get('BASE_DIR') + dir_path

   if not os.path.isdir(filedir):
       os.makedirs(filedir)

   filename = get_file_name() + '.pdf'
   dest = filedir + filename

   file = open(dest, "wb")
   pisa.CreatePDF(io.BytesIO(pdf_data.encode('utf-8')), file)
   file.close()

   uploaded_file = UploadedFile(dest, filename)
   upload_path = key.format(identifier=get_file_name())
   new_file = upload(uploaded_file, upload_path)
   # Removing old file created
   os.remove(dest)

   return new_file

The HTML file is formed using the render_template method of Flask. This method takes the HTML template and its required variables as arguments. In our case, we pass in ‘pdf/speakers_pdf.html’ (the template) and speakers. Here, speakers is the list of speakers to be included in the PDF file. In the template, we loop through each item of speakers and print the speaker's name, email, list of sessions, mobile number, a short biography, organization, and position. All these fields form a row in the table; hence, each speaker is a row in our PDF file.
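Putting these pieces together, the export helper might look roughly like the sketch below. It assumes the speakers list has already been fetched, and the upload key is just a placeholder since the actual constant is not shown in this post:

from flask import render_template

# Sketch only: render the speakers template and hand the resulting HTML to create_save_pdf
def export_speakers_pdf(speakers):
    pdf_html = render_template('pdf/speakers_pdf.html', speakers=speakers)
    # 'exports/{identifier}/speakers.pdf' is a placeholder key, not the project's constant
    return create_save_pdf(pdf_html, 'exports/{identifier}/speakers.pdf')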

The various columns are as follows:

<thead>
<tr>
   <th>
       {{ ("Name") }}
   </th>
   <th>
       {{ ("Email") }}
   </th>
   <th>
       {{ ("Sessions") }}
   </th>
   <th>
       {{ ("Mobile") }}
   </th>
   <th>
       {{ ("Short Biography") }}
   </th>
   <th>
       {{ ("Organisation") }}
   </th>
   <th>
       {{ ("Position") }}
   </th>
</tr>
</thead>

A snippet of the code which handles iterating over the speakers’ list and forming a row is as follows:

{% for speaker in speakers %}
   <tr class="padded" style="text-align:center; margin-top: 5px">
       <td>
           {% if speaker.name %}
               {{ speaker.name }}
           {% else %}
               {{ "-" }}
           {% endif %}
       </td>
       <td>
           {% if speaker.email %}
               {{ speaker.email }}
           {% else %}
               {{ "-" }}
           {% endif %}
       </td>
       <td>
           {% if speaker.sessions %}
               {% for session in speaker.sessions %}
                   {{ session.name }}<br>
               {% endfor %}
           {% else %}
               {{ "-" }}
           {% endif %}
       </td>
      …. So on
   </tr>
{% endfor %}

The full template can be found here.

Obtaining the Speakers PDF file:

Firstly, we have an API endpoint which starts the task on the server.

GET - /v1/events/{event_identifier}/export/speakers/pdf

Here, event_identifier is the unique ID of the event. This endpoint starts a celery task on the server to export the speakers of the event as a PDF file. It returns the URL of the task to get the status of the export task. A sample response is as follows:

{
  "task_url": "/v1/tasks/b7ca7088-876e-4c29-a0ee-b8029a64849a"
}

The user can go to the above-returned URL and check the status of the Celery task. If the task completed successfully, he/she will get the download URL. The endpoint to check the status of the task is:

GET - /v1/tasks/{task_id}

and the corresponding response from the server –

{
  "result": {
    "download_url": "/v1/events/1/exports/http://localhost/static/media/exports/1/zip/OGpMM0w2RH/event1.zip"
  },
  "state": "SUCCESS"
}

The file can be downloaded from the above-mentioned URL.

Resources

Continue ReadingOpen Event Server – Export Speakers as PDF File

Correct the API for Downloading GitHub Content in SUSI.AI android app

The content from GitHub in the SUSI.AI Android app is downloaded through plain links, and the data parsed from them is used in the app depending on what needs to be done with that data at the time.

A simple example of this, as used in the app, is:

private val imageLink = "https://raw.githubusercontent.com/fossasia/susi_skill_data/master/models/general/"

Above is the link used to download and display the skill images in the app. All the API calls in SUSI generally go through the SUSI server, so the call that displays the skill images through GitHub links should be replaced with a call to the SUSI server instead. Hard-coding GitHub links is a poor programming style: with it, the project cannot easily be cloned by other developers or moved to other repositories.

This programming style issue in the Android app was fixed by adding the API that calls the SUSI server for externally sourced images and removing the existing implementation that downloads the images from GitHub directly.

Below is an example of the link for the API call the app now makes to the SUSI server:

${BaseUrl.SUSI_DEFAULT_BASE_URL}/cms/getImage.png?model=${skillData.model}&language=${skillData.language}&group=${skillData.group}&image=${skillData.image}

The link above is written using Kotlin string interpolation; here is what the actual URL looks like:

https://api.susi.ai/cms/getImage.png?model=${skillData.model}&language=${skillData.language}&group=${skillData.group}&image=${skillData.image}

Here, the values with the ‘$’ symbol are the parameters for the API, taken from the SkillData.kt file; they are put inside the link so that the required image can be fetched.

Now, since this link is used to set the images in several places, an object class was made to avoid duplicate code. The object class contains two functions: one for setting the image and one for parsing the SkillData object and forming the URL out of it. Here is the code for the object class:

object Utils {

  fun setSkillsImage(skillData: SkillData, imageView: ImageView) {
      Picasso.with(imageView.context)
              .load(getImageLink(skillData))
              .error(R.drawable.ic_susi)
              .fit()
              .centerCrop()
              .into(imageView)
  }

  fun getImageLink(skillData: SkillData): String {
      val link = "${BaseUrl.SUSI_DEFAULT_BASE_URL}/cms/getImage.png?model=${skillData.model}&language=${skillData.language}&group=${skillData.group}&image=${skillData.image}"
              .replace(" ", "%20")
      Timber.d("SUSI URI" + link)
      return link
  }
}

The setSkillsImage() method sets the image in the ImageView, and the getImageLink() method returns the image link formed from the SkillData object.

References

 

Continue ReadingCorrect the API for Downloading GitHub Content in SUSI.AI android app

Implementing Scheduled Sessions in Open Event Scheduler

Until recently, the Open Event Frontend version 2 didn’t have the functionality to display the already scheduled sessions of an event on the sessions scheduler. Displaying the already scheduled sessions is important so that the event organizer can always use the sessions scheduler as a draft and not worry about losing progress or data about scheduled sessions’ timings. Therefore, just like a list of unscheduled sessions was implemented for the scheduler, the provision for displaying scheduled sessions also had to be implemented.

The first step towards implementing this was to fetch the scheduled sessions’ details from Open Event Server. To perform this fetch, an appropriate filter was required. This filter should ideally ask the server to send only those sessions that are “scheduled”. Thus, scheduled sessions need to be defined as sessions which have non-null values in their starts-at and ends-at fields. Also, a few more details are required to be fetched for a clean display of scheduled sessions. First, the sessions’ speaker details should be included so that the speakers’ names can be displayed alongside the sessions. Also, the microlocations’ details need to be included so that each session is displayed according to its microlocation. For example, if a session is to be delivered in a place named ‘Lecture Hall A’, it should appear under the ‘Lecture Hall A’ microlocation column. Therefore, the filter goes as follows:

let scheduledFilterOptions = [
      {
        and: [
          {
            name : 'starts-at',
            op   : 'ne',
            val  : null
          },
          {
            name : 'ends-at',
            op   : 'ne',
            val  : null
          }
        ]
      }
    ];

 

After fetching the scheduled sessions’ details, they need to be delivered to the fullcalendar code for display on the session scheduler. For that, the sessions need to be converted into a format which can be parsed by the fullcalendar add-on of emberJS. For example, fullcalendar calls microlocations ‘resources’. Here is the format which fullcalendar understands:

{
        title      : `${session.title} | ${speakerNames.join(', ')}`,
        start      : session.startsAt.format('YYYY-MM-DDTHH:mm:SS'),
        end        : session.endsAt.format('YYYY-MM-DDTHH:mm:SS'),
        resourceId : session.microlocation.get('id'),
        color      : session.track.get('color'),
        serverId   : session.get('id') // id of the session on BE
}
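The speakerNames used in the title above is assembled from the session's speakers. Here is a minimal sketch of how the fetched sessions might be mapped into this format; the variable names are illustrative, not the component's actual code:

// Illustrative sketch: convert the fetched scheduled sessions into fullcalendar events
let scheduledEvents = scheduledSessions.map(session => {
  // Join the names of all speakers of this session for the event title
  let speakerNames = session.speakers.map(speaker => speaker.get('name'));
  return {
    title      : `${session.title} | ${speakerNames.join(', ')}`,
    start      : session.startsAt.format('YYYY-MM-DDTHH:mm:SS'),
    end        : session.endsAt.format('YYYY-MM-DDTHH:mm:SS'),
    resourceId : session.microlocation.get('id'),
    color      : session.track.get('color'),
    serverId   : session.get('id')
  };
});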

 

Once the sessions are in the appropriate format, their data is sent to the fullcalendar template, which renders them on the screen:

Screen Shot 2018-08-21 at 8.20.27 PM.png

This completes the implementation of displaying the scheduled sessions of an event on the Open Event Scheduler.

Resources

Continue ReadingImplementing Scheduled Sessions in Open Event Scheduler

Building SUSI.AI Android App with FDroid

Fdroid is an app store for Free and Open Source Software (FOSS). Building and hosting an app on Fdroid is not as easy a process as hosting one on Google Play. A certain set of build checks must be done prior to making a merge request (which is similar to a pull request on GitHub) in the fdroid-data GitLab repository. The SUSI.AI Android app has undergone all these checks and tests and is now ready for the merge request to be made.

Setting up the fdroid-server and fdroid-data repositories is a separate task and is fairly easy. Building the app using the tools provided by fdroid is another matter, and is the one that causes the most problems. It involves quite a few steps to get started. Fdroid requires all apps to be built using:

$ fdroid build -v -l ai.susi

This will output a set of logs which tell us what went wrong in the build. The usual problem with a first-time app is that the build does not take place at all, because the metadata file needs to be changed to initiate a build.

The metadata file is used for the build process and contains all the information about the app. The metadata file for the ai.susi package is a .yaml file.

Builds:
  - versionName: 1.0.10
    versionCode: 11
    commit: 1ad2fd0e858b1256617e652c6c8ce1b8372473e6
    subdir: app
    gradle:
      - fdroid

This is the metadata reference file’s build section that is used for the build process with the command mentioned above. The versionName and versionCode are found in the build.gradle file of the app, commit denotes the commit-id of the latest commit that will be checked out and built, and subdir shows the subdirectory of the app; here the subdirectory is the app folder.

Next is the interesting part: since we are using flavors in the app, we have to mention under gradle the flavor which we are using. In our case we are using the flavor named “fdroid”, and by mentioning this we build only the “fdroid” flavor of the app.

Also, when building the app there were many blockers; the usual reasons for the build failures were:

1 actionable task: 1 executed
INFO: Scanning source for common problems…
ERROR: Found usual suspect 'youtube.*android.*player.*api' at app/libs/YouTubeAndroidPlayerApi.jar
WARNING: Found JAR file at app/libs/YouTubeAndroidPlayerApi.jar
WARNING: Found possible binary at app/src/main/assets/snowboy/alexa_02092017.umdl
WARNING: Found possible binary at app/src/main/assets/snowboy/common.res
ERROR: Found shared library at app/src/main/jniLibs/arm64-v8a/libsnowboy-detect-android.so
ERROR: Found shared library at app/src/main/jniLibs/armeabi-v7a/libsnowboy-detect-android.so
INFO: Removing gradle-wrapper.jar at gradle/wrapper/gradle-wrapper.jar
ERROR: Could not build app ai.susi: Can't build due to 3 errors while scanning
INFO: Finished
INFO: 1 build failed

The reason for these build failures is that fdroid does not allow us to use prebuilt files or any proprietary software. The above log indicates the prebuilt shared libraries which should be removed, and also the YouTubeAndroidPlayerApi.jar, which is proprietary software and hence needs to be removed. So, to remove the files that are not used in the fdroid flavor and exclude them from the build process, we have to include the following statements in the build section of the metadata reference file:

    rm:
      - app/src/main/jniLibs/arm64-v8a/libsnowboy-detect-android.so
      - app/src/main/jniLibs/armeabi-v7a/libsnowboy-detect-android.so
      - app/libs/YouTubeAndroidPlayerApi.jar

Once the metadata file is complete we are ready to run the build command once again. If you have properly set up the environment on your local PC, the build will end successfully, assuming there are no Java or other syntax errors.

It is worth mentioning a few other facts which are common to Android software projects. Usually the source code is kept in a folder named “app” inside the repository, and this is the common scenario when Android Studio builds up the project from scratch. If this “app” folder is one level below the root, that is “android/app”, the build instructions shown above will throw an error as they cannot find the project files.

The reason for this is that the metadata file mentions “subdir: app”. Change this to “subdir: android/app” and run the build again; the idea is to direct the build to where the project files are, as sketched below.
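For example, keeping the same fields as the build section shown earlier, the metadata would then start like this:

Builds:
  - versionName: 1.0.10
    versionCode: 11
    commit: 1ad2fd0e858b1256617e652c6c8ce1b8372473e6
    subdir: android/app
    gradle:
      - fdroid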

Reference:

  1. Metadata : https://f-droid.org/docs/Build_Metadata_Reference/#Build
  2. Publish an app on fdroid: https://blog.fossasia.org/publish-an-open-source-app-on-fdroid/
Continue ReadingBuilding SUSI.AI Android App with FDroid