Handling Android Runtime permissions in UI Tests in SUSI.AI Android

With the introduction of Android Marshmallow (API level 23), SUSI.AI needed to ensure that:

  • The app verifies that it holds a required permission at the moment it is needed
  • The user is asked to grant a permission when appropriate
  • The outcome of the request (granted or denied) is handled correctly in the UI, for example through empty states or data feedback

You might have written UI tests. But what about cases where the app needs a runtime permission, such as access to the contacts on the device, for the tests to run? Would those tests pass on Android 6.0+ devices?
Can Espresso be used to achieve this? Unfortunately not: Espresso cannot access components outside of the application package. So, how do we handle this?
There are two approaches:
1) Using the UI Automator
2) Using the GrantPermissionRule

Let us have a look at both of these approaches in detail:

Using UI Automator to Handle Runtime Permissions on Android for UI Tests

UI Automator is a UI testing framework suitable for cross-app functional UI testing across system and installed apps. This framework requires Android 4.3 (API level 18) or higher.
The UI Automator testing framework provides a set of APIs to build UI tests that perform interactions on user apps and system apps. The UI Automator APIs allow you to perform operations such as opening the Settings menu or the app launcher on a test device. This testing framework is well-suited for writing black box-style automated tests, where the test code does not rely on internal implementation details of the target app.

The key features of this testing framework include the following:

  • A viewer to inspect the layout hierarchy. For more information, see UI Automator Viewer.
  • An API to retrieve state information and perform operations on the target device. For more information, see Accessing device state.
  • APIs that support cross-app UI testing. For more information, see UI Automator APIs.
  • Unlike Espresso, UI Automator can interact with system applications, which means that you will be able to interact with the permissions dialog if needed.

So, how do we do this? If you want to grant a permission in a UI test, you need to find the corresponding UiObject that you wish to click on. In our case, the permissions dialog is that UiObject. This object is a representation of a view – it is not bound to the view, but contains the information needed to locate the matching view at runtime, based on the properties of the UiSelector instance passed to its constructor. A UiSelector instance is an object that declares which elements the UI test should target within the layout. You can set various properties on this UiSelector instance, such as a text value, class name or content-description.
So, once you have your UiObject (the permissions dialog), you can determine which option you want to select and then use the click() method to grant or deny the permission.

fun allowPermissionsIfNeeded() {
    if (Build.VERSION.SDK_INT >= 23) {
        // Locate the "ALLOW" button of the system permissions dialog
        val allowPermissions: UiObject = mDevice.findObject(UiSelector().text("ALLOW"))
        if (allowPermissions.exists()) {
            try {
                allowPermissions.click()
            } catch (e: UiObjectNotFoundException) {
                Log.e(TAG, "There is no permission dialog to interact with", e)
            }
        }
    }
}

Similarly, you can handle the “DENY” case, as sketched below.
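This deny-side counterpart is not from the original post; the exact button label varies across Android versions (“DENY”, “Deny” or “DON'T ALLOW”), and mDevice is assumed to be the same UiDevice instance used above.

fun denyPermissionsIfNeeded() {
    if (Build.VERSION.SDK_INT >= 23) {
        // "DENY" is an assumption; adjust the label to the dialog shown on the test device
        val denyPermissions: UiObject = mDevice.findObject(UiSelector().text("DENY"))
        if (denyPermissions.exists()) {
            try {
                denyPermissions.click()
            } catch (e: UiObjectNotFoundException) {
                Log.e(TAG, "There is no permission dialog to interact with", e)
            }
        }
    }
}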

So, this is how you can use UI Automator to handle Android runtime permissions in UI tests. Now, let us have a look at the other, newer approach:

Using GrantPermissionRule to Handle Runtime Permissions on Android for UI Tests

GrantPermissionRule is used to grant runtime permissions up front, so that the permission dialog does not show up and block the UI of the app. With this approach, permissions can only be granted on API level 23 (Android M) or higher. All you need to do is add the following rule to your UI test:

@Rule
public GrantPermissionRule mRuntimePermissionRule =
        GrantPermissionRule.grant(android.Manifest.permission.ACCESS_FINE_LOCATION);

ACCESS_FINE_LOCATION (in the above code) can be replaced by any other permission that your app requires.
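For reference, the same rule can also be declared in a Kotlin test and can grant several permissions in one call. The test class name and the second permission below are only illustrative, not taken from the SUSI.AI test suite:

import androidx.test.rule.GrantPermissionRule
import org.junit.Rule

class ChatScreenTest { // hypothetical test class

    // Grant every permission the test needs up front so no system dialog blocks the UI
    @get:Rule
    val runtimePermissionRule: GrantPermissionRule = GrantPermissionRule.grant(
        android.Manifest.permission.ACCESS_FINE_LOCATION,
        android.Manifest.permission.RECORD_AUDIO
    )
}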

This will also be implemented in the SUSI.AI Android app for UI tests. Unit tests and UI tests are an integral part of good software, so write quality tests for your projects to detect and fix bugs and flaws easily and conveniently.


Upload Avatar for a User in SUSI.AI Server

In this blog post, we are going to discuss how the feature to upload an avatar for a user was implemented on the SUSI.AI Server. The API endpoint through which a user can upload his/her avatar image is https://api.susi.ai/aaa/uploadAvatar.json.

  • The endpoint is of POST type.
  • It accepts two request parameters –
    • image – It contains the entire image file sent from the client
    • access_token – It is the access token of the user

The minimalUserRole is set to USER for this API, as only logged-in users can use it.
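To make the request shape concrete, here is a minimal client-side sketch using OkHttp. It is not part of the SUSI.AI codebase and only illustrates the two parameters being sent as a multipart POST:

import okhttp3.MediaType.Companion.toMediaType
import okhttp3.MultipartBody
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.asRequestBody
import java.io.File

fun uploadAvatar(imageFile: File, accessToken: String) {
    // image goes in as a file part, access_token as a plain form field
    val body = MultipartBody.Builder()
        .setType(MultipartBody.FORM)
        .addFormDataPart("image", imageFile.name, imageFile.asRequestBody("image/jpeg".toMediaType()))
        .addFormDataPart("access_token", accessToken)
        .build()
    val request = Request.Builder()
        .url("https://api.susi.ai/aaa/uploadAvatar.json")
        .post(body)
        .build()
    OkHttpClient().newCall(request).execute().use { response ->
        println("Upload accepted: ${response.isSuccessful}")
    }
}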

Going through the API development

  • The image and access_token parameters are first extracted from the req object that is passed to the main function, and are then stored in variables.
  • There is a check whether the access_token and image exist. If they don't, an error is returned.
  • The following code snippet covers the above two points –

protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
    Part imagePart = req.getPart("image");
    if (req.getParameter("access_token") != null) {
        if (imagePart == null) {
            result.put("accepted", false);
            result.put("message", "Image file not received");
        } else {
            // ... process the uploaded image (shown below)
        }
    } else {
        result.put("message", "Access token are not given");
        result.put("accepted", false);
        resp.setContentType("application/json");
        resp.setCharacterEncoding("UTF-8");
        resp.getWriter().write(result.toString());
    }
}

 

  • Then the input stream is extracted from the imagePart and stored. After that, the user's identity is checked to verify that it is valid.
  • The input stream is converted into the Image type using the ImageIO.read method.
  • The image is eventually converted into a BufferedImage using the function described below.

public static BufferedImage toBufferedImage(Image img)
{
    if (img instanceof BufferedImage)
        return (BufferedImage) img;

    // Create a buffered image with transparency
    BufferedImage bimage = new BufferedImage(img.getWidth(null),     
    img.getHeight(null), BufferedImage.TYPE_INT_ARGB);
    // Draw the image on to the buffered image
    Graphics2D bGr = bimage.createGraphics();
    bGr.drawImage(img, 0, 0, null);
    bGr.dispose();
    // Return the buffered image
    return bimage;
}

 

  • After that, the file path and name are set. The avatar for each user is stored at /data/avatar_uploads/<uuid of the user>.jpg.
  • The avatar is written to that path using the ImageIO.write function. Once the file is stored on the server, a success response is sent to the client. A condensed sketch of this read/convert/write flow is shown below.
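The sketch is written in Kotlin for brevity (the server code itself is Java); the uuid parameter and the relative data path are assumptions based on the description above:

import java.awt.image.BufferedImage
import java.io.File
import java.io.InputStream
import javax.imageio.ImageIO

fun storeAvatar(imageStream: InputStream, uuid: String) {
    val image = ImageIO.read(imageStream)                 // input stream -> image
    val buffered: BufferedImage = toBufferedImage(image)  // helper shown above
    val target = File("data/avatar_uploads/$uuid.jpg")    // <uuid of the user>.jpg
    target.parentFile.mkdirs()                            // make sure the folder exists
    ImageIO.write(buffered, "jpg", target)                // write the avatar to disk
}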


Displaying Avatar Image of Users using Gravatar on SUSI.AI

This blog discusses how the avatar of the user is shown at different places in the UI, like the app bar and the feedback comments, using the Gravatar service on SUSI.AI. A Gravatar is a Globally Recognized Avatar. Your Gravatar is an image that follows you from site to site, appearing beside your name when you do things like comment or post on a blog. The Gravatar service has been integrated into SUSI.AI so that the avatar also helps identify the user.

Going through the implementation

  • The aim is to get an avatar of the user from the email id. For that purpose, Gravatar exposes a publicly available avatar of the user, which can be accessed via the following steps:
    • Creating the hash of the email
    • Sending the image request
  • For creating the MD5 hash of the email, we use the npm library md5. The function takes a string as input and returns the hash of the string.
  • Now, a URL is generated using this hash.
  • The URL format is https://www.gravatar.com/avatar/HASH, where ‘HASH’ is the hash of the email of the user. In case the hash is invalid, Gravatar returns a default avatar image.
  • Also, ‘.jpg’ is appended to the URL to maintain image format consistency on the website. When the generated URL is used in an <img> tag, the browser requests it like any other image and the avatar is returned.
  • The avatar is displayed at various places in the UI, like the app bar and the feedback comments section. The implementation in the feedback section is discussed below.
  • The CircleImage component is used for displaying the avatar; it takes name as a required property and src as the link of the image, if present. The following function returns the props for the CircleImage component.

import md5 from 'md5';
import { urls } from './';

// urls.GRAVATAR_URL = 'https://www.gravatar.com/avatar';

let getAvatarProps = emailId => {
  const emailHash = md5(emailId);
  const GRAVATAR_IMAGE_URL = `${urls.GRAVATAR_URL}/${emailHash}.jpg`;
  const avatarProps = {
    name: emailId.toUpperCase(),
    src: GRAVATAR_IMAGE_URL,
  };
  return avatarProps;
};

export default getAvatarProps;

 

  • Then pass the returned props to the CircleImage component and set it as the leftAvatar property of the feedback comments ListItem. The following is the snippet –

….
<ListItem
  key={index}
  leftAvatar={<CircleImage {...avatarProps} size="40" />}
  primaryText={
    <div>
      <div>{`${data.email.slice(
        0,
        data.email.indexOf('@') + 1,
      )}...`}</div>
      <div className="feedback-timestamp">
        {this.formatDate(parseDate(data.timestamp))}
      </div>
    </div>
  }
  secondaryText={<p>{data.feedback}</p>}
/>
….
.
.

 

  • This displays the avatar of the user in the UI. The UI changes have been shown below:


Overriding the Basic File Attributes while Skill Creation/Editing on Server

In this blog post, we are going to understand the method for overriding basic file attributes during Skill creation/editing on the SUSI Server. The need for this arose because the creationTime of the Skill file stored on the server changed whenever the skill was edited.

Need for the implementation

As briefly explained above, the creationTime of the Skill file stored on the server changes when the skill is edited. The lastModifiedTime also needed to be overridden so that the skill metrics give correct results. Currently, we have two metrics for SUSI Skills – Recently Updated Skills and Newest Skills. The former is determined by the lastModifiedTime and the latter by the creationTime. Due to inconsistencies in these attributes, the skills were shown out of order. The lastModifiedTime is overridden to the epoch date during skill creation, so that newly created skills don't show up in the Recently Updated Skills section, whereas the creationTime is overridden on edit to preserve the correct creation time.

Going through the implementation

Let us first have a look at how the creationTime is overridden in the ModifySkillService.java file.

.
BasicFileAttributes attr = null;
Path p = Paths.get(skill.getPath());
try {
    attr = Files.readAttributes(p, BasicFileAttributes.class);
} catch (IOException e) {
    e.printStackTrace();
}
FileTime skillCreationTime = null;
if( attr != null ) {
    skillCreationTime = attr.creationTime();
}

if (model_name.equals(modified_model_name) &&
    group_name.equals(modified_group_name) &&
    language_name.equals(modified_language_name) &&
    skill_name.equals(modified_skill_name)) {
    // Writing to File
    try (FileWriter file = new FileWriter(skill)) {
        file.write(content);
        json.put("message", "Skill updated");
        json.put("accepted", true);

    } catch (IOException e) {
        e.printStackTrace();
        json.put("message", "error: " + e.getMessage());
    }
    // Keep the creation time same as previous
    if(attr!=null) {
        try {
            Files.setAttribute(p, "creationTime", skillCreationTime);
        } catch (IOException e) {
            System.err.println("Cannot persist the creation time. " + e);
        }
    }
}
.
.
.

 

  • Firstly, we get the BasicFileAttributes of the Skill file and store it in the attr variable.
  • Next, we initialise the variable skillCreationTime of type FileTime to null and set the value to the existing creationTime.
  • The new Skill file is saved at the path using the FileWriter instance, which changes the creationTime and lastModifiedTime to the time of editing of the skill.
  • The above behaviour is not desired, and hence we override the creationTime with the FileTime saved in skillCreationTime. This ensures that the creation time of the skill is persisted even after editing the skill.
  • Now we are going to see how the lastModifiedTime was overridden in the CreateSkillService.java file.

.
Path newPath = Paths.get(path);
// Override modified date to an older date so that the recently updated metrics works fine
// Set it to the epoch time
try {
  Files.setAttribute(newPath, "lastModifiedTime", FileTime.fromMillis(0));
} catch (IOException e) {
  System.err.println("Cannot override the modified time. " + e);
}
.
.
.

 

  • For this, we get the newPath of the Skill file, and then the lastModifiedTime for the Skill file is explicitly set to a particular time.
  • We set it to FileTime.fromMillis(0), i.e. the epoch time, as can be verified with the small sketch after this list.
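This sketch is not part of the original post; it only reads the two attributes back with the same java.nio API, since Newest Skills sorts by creationTime and Recently Updated Skills by lastModifiedTime:

import java.nio.file.Files
import java.nio.file.Paths
import java.nio.file.attribute.BasicFileAttributes

fun printSkillTimes(skillPath: String) {
    val attrs = Files.readAttributes(Paths.get(skillPath), BasicFileAttributes::class.java)
    println("creationTime     = ${attrs.creationTime()}")     // preserved across edits
    println("lastModifiedTime = ${attrs.lastModifiedTime()}") // epoch for freshly created skills
}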

I hope that I was able to convey my learnings and the implementation of the code properly, and that it proves helpful for your understanding.

Resources

Documentation for BasicFileAttributes Interface – https://docs.oracle.com/javase/8/docs/api/java/nio/file/attribute/BasicFileAttributes.html


Change Role of User in SUSI.AI Admin section

In this blog post, we are going to implement the functionality to change the role of a user from the Admin section of the Skills CMS web app. The SUSI Server has multiple user role levels with different access levels and functions. We will see how to facilitate the change in roles.

The UI interacts with the back-end server via the following API –

  • Endpoint URL – https://api.susi.ai/aaa/changeRoles.json
  • The minimal user role for hitting the API is ADMIN
  • It takes the following parameters –
    • user – The email of the user.
    • role – The new role of the user. It can take only selected values that are accepted by the server and whose roles have been defined by the server. They are – USER, REVIEWER, OPERATOR, ADMIN, SUPERADMIN.
    • access_token – The access token of the user who is making the request

Implementation on the CMS Admin

  • Firstly, a dialog box containing a drop-down with the list of possible user roles was added in the Admin section. The dialog box is shown when Edit, present in each row of the user table, is clicked.
  • The UI of the dialog box is as follows –

  • The implementation of the UI is done as follows –

….
<Dialog
  title="Change User Role"
  actions={actions} // Contains 2 buttons for Change and Cancel
  modal={true}
  open={this.state.showEditDialog}
>
  <div>
    Select new User Role for
    <span style={{ fontWeight: 'bold', marginLeft: '5px' }}>
      {this.state.userEmail}
    </span>
  </div>
  <div>
    <DropDownMenu
      selectedMenuItemStyle={blueThemeColor}
      onChange={this.handleUserRoleChange}
      value={this.state.userRole}
      autoWidth={false}
    >
      <MenuItem
        primaryText="USER"
        value="user"
        className="setting-item"
      />
      /*
        Similarly for REVIEWER, OPERATOR, ADMIN, SUPERADMIN
        Add Menu Items
     */ 
    </DropDownMenu>
  </div>
</Dialog>
….
.
.
.

 

  • In the above UI implementation, the Material-UI components Dialog, DropDownMenu, MenuItem and FlatButton are used.
  • When the drop-down value is changed, the handleUserRoleChange function is executed. The function updates the state variable, and its definition is as follows –
handleUserRoleChange = (event, index, value) => {
  this.setState({
      userRole: value,
  });
};

 

  • Once the correct user role has been selected, the click handlers for the action buttons come into the picture.
  • The handler for the Cancel button simply closes the dialog box, whereas the handler for the Change button makes an API call that changes the user role on the server.
  • The click handlers for both buttons are as follows –

// Handler for ‘Change’ button
onChange = () => {
  let url =
  let url = `${urls.API_URL}/aaa/changeRoles.json?user=${this.state.userEmail}&role=${this.state.userRole}&access_token=${cookies.get('loggedIn')}`;
  let self = this;
  $.ajax({
    url: url,
    dataType: 'jsonp',
    crossDomain: true,
    timeout: 3000,
    async: false,
    success: function(response) {
      self.setState({ changeRoleDialog: true });
    },
    error: function(errorThrown) {
      console.log(errorThrown);
    },
  });
  this.handleClose();
};

// Handler for ‘Cancel’ button
handleClose = () => {
  this.setState({
    showEditDialog: false,
  });
};
  • In the first function above, the URL endpoint is hit, and on success the Success Dialog is shown and the previous dialog is hidden.
  • In the second function above, only the Dialog box is hidden.
  • The crossDomain option in the AJAX call is set to true to enable API usage from multiple domain names, and the jsonp data type deals with the same issue.


Adding Support for Playing Youtube Videos in SUSI iOS App

SUSI supports many exciting features in the chat screen, from simple answer types to complex map, RSS, and table responses. The user can even ask SUSI for an image of anything, and SUSI responds with the image in the chat screen. What if SUSI could also play YouTube videos on request? Wouldn't that be exciting? Yes, SUSI can play YouTube videos too. All the SUSI clients (iOS, Android, and Web) support playing YouTube videos in chat.

Google provides a YouTube iFrame Player API that can be used to play videos inside the app itself instead of handing the video off to the YouTube app. The iFrame API supports playing YouTube videos in iOS applications.

In this post, we will see how the YouTube video playback feature is implemented in SUSI iOS.

Getting response from server side –

When we ask SUSI to play a video, the response contains the YouTube video ID in a video_play action type. SUSI iOS makes use of this video ID to play the YouTube video. In the response below, you can see that we also get an answer action type, and the expression of the answer action holds the title of the video.

"actions": [
  {
    "type": "answer",
    "expression": "Playing Kygo - Firestone (Official Video) ft. Conrad Sewell"
  },
  {
    "identifier": "9Sc-ir2UwGU",
    "identifier_type": "youtube",
    "type": "video_play"
  }
]

Integrating the YouTube player in the app –

We have a VideoPlayerView that handles all the iFrame API methods and player events with the help of the YTPlayer HTML file.

When SUSI responds with a video_play action, the first step is to register the YouTubePlayerCell and present the cell in the collectionView of the chat screen.

Registering the Cell –

The register(_:forCellWithReuseIdentifier:) method registers a class for use in creating new collection view cells.

collectionView?.register(YouTubePlayerCell.self, forCellWithReuseIdentifier: ControllerConstants.youtubePlayerCell)

 

Presenting the YouTubePlayerCell –

Here we present the cell in the chat screen using the cellForItemAt method of UICollectionView.

if message.actionType == ActionType.video_play.rawValue {
    if let cell = collectionView.dequeueReusableCell(withReuseIdentifier: ControllerConstants.youtubePlayerCell, for: indexPath) as? YouTubePlayerCell {
        cell.message = message
        cell.delegate = self
        return cell
    }
}

 

Setting size for cell –

We use the sizeForItemAt method of UICollectionView to set the size.

if message.actionType == ActionType.video_play.rawValue {
    return CGSize(width: view.frame.width, height: 158)
}

In YouTubePlayerCell, we display the thumbnail of the YouTube video using a UIImageView. The following method gets the thumbnail of a particular video using its video ID by –

  1. Getting thumbnail image from URL
  2. Setting image to imageView
func downloadThumbnail() {
    if let videoID = message?.videoData?.identifier {
        let thumbnailURLString = "https://img.youtube.com/vi/\(videoID)/default.jpg"
        let thumbnailURL = URL(string: thumbnailURLString)
        thumbnailView.kf.setImage(with: thumbnailURL, placeholder: ControllerConstants.Images.placeholder, options: nil, progressBlock: nil, completionHandler: nil)
    }
}

We add a play button in the center of the thumbnail view so that when the user taps it, we can present the player.

On tapping the play button, we present the PlayerViewController, which holds all the player setup, using the overFullScreen modalPresentationStyle.

@objc func playVideo() {
    if let videoID = message?.videoData?.identifier {
        let playerVC = PlayerViewController(videoID: videoID)
        playerVC.modalPresentationStyle = .overFullScreen
        delegate?.loadNewScreen(controller: playerVC)
    }
}

The method above presents the YouTube player for the given video ID. We use the YouTubePlayerDelegate method below to autoplay the video.

func playerReady(_ videoPlayer: YouTubePlayerView) {
    videoPlayer.play()
}

The player can be dismissed by tapping on the light black background.

Final Output –

Resources –

  1. Youtube iOS Player API
  2. SUSI API Sample Response for Playing Video
  3. SUSI iOS Link

Correct the API for Downloading GitHub Content in SUSI.AI android app

The content from GitHub in the SUSI.AI Android app is downloaded through plain links, and the data parsed from them is used in the app depending on what needs to be done with that data at the time.

A simple example of this, as used in the app, was:

private val imageLink = "https://raw.githubusercontent.com/fossasia/susi_skill_data/master/models/general/"

Above is the link that was used to download and display the images of the skills in the app. All the API calls in SUSI generally go through the SUSI server, so the call that displays the skill images through GitHub links should be replaced by a call to the SUSI server instead. The old approach is poor programming style: with it, the project cannot easily be cloned by other developers or moved to other repositories.

This programming style issue in the Android app was therefore fixed by adding the API that calls the SUSI server for external-source images and removing the existing implementation that downloads the images from GitHub directly.

Below is an example of the link for the API call made in the app so that the request goes to the SUSI server:

${BaseUrl.SUSI_DEFAULT_BASE_URL}/cms/getImage.png?model=${skillData.model}&language=${skillData.language}&group=${skillData.group}&image=${skillData.image}

The link is written in the Kotlin string interpolation manner; here is what the actual URL would look like:

https://api.susi.ai/cms/getImage.png?model=${skillData.model}&language=${skillData.language}&group=${skillData.group}&image=${skillData.image}

Here the values with the ‘$’ symbol are the parameters for the API, taken from the SkillData.kt file and put inside the link so that the required image can be fetched.

Now, since we use this link to set the images in several places, an object class was made to avoid duplicate code. The object class contains two functions, one for setting the image and one for parsing the SkillData object and forming a URL out of it. Here is the code for the object class:

object Utils {

    fun setSkillsImage(skillData: SkillData, imageView: ImageView) {
        Picasso.with(imageView.context)
                .load(getImageLink(skillData))
                .error(R.drawable.ic_susi)
                .fit()
                .centerCrop()
                .into(imageView)
    }

    fun getImageLink(skillData: SkillData): String {
        val link = "${BaseUrl.SUSI_DEFAULT_BASE_URL}/cms/getImage.png?model=${skillData.model}&language=${skillData.language}&group=${skillData.group}&image=${skillData.image}"
                .replace(" ", "%20")
        Timber.d("SUSI URI" + link)
        return link
    }
}

The setSkillsImage() method sets the image in the ImageView and the getImageLink() method returns the image URL formed from the SkillData object. A hypothetical call site is sketched below.
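In the sketch below, the function and variable names are illustrative; SkillData, Utils, Timber and ImageView are the types used in the app:

import android.widget.ImageView
import timber.log.Timber

fun bindSkillImage(skillData: SkillData, skillImage: ImageView) {
    Utils.setSkillsImage(skillData, skillImage)                   // Picasso load with ic_susi fallback
    Timber.d("Skill image URL: " + Utils.getImageLink(skillData)) // same URL the server resolves
}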


Building SUSI.AI Android App with FDroid

F-Droid is an app store for Free and Open Source Software (FOSS). Building and hosting an app on F-Droid is not as easy as hosting one on Google Play. A certain set of build checks must be completed before making a merge request (which is similar to a pull request on GitHub) in the fdroid-data GitLab repository. The SUSI.AI Android app has gone through all these checks and tests and is now ready for the merge request to be made.

Setting up the fdroid-server and fdroid-data repositories is a separate task and is fairly easy. Building the app using the tools provided by F-Droid is another matter and is the part that causes the most problems. It involves quite a few steps to get started. F-Droid requires all apps to be built using:

$ fdroid build -v -l ai.susi

This will output a set of logs which tell us what went wrong in the build. The usual problem with a first-time app is that the build does not take place at all; the reason is that our metadata file needs to be changed to initiate a build.

The metadata file is used for the build process and contains all the information about the app. The metadata file for the ai.susi package is a .yaml file.

Builds:
  - versionName: 1.0.10
    versionCode: 11
    commit: 1ad2fd0e858b1256617e652c6c8ce1b8372473e6
    subdir: app
    gradle:
      - fdroid

This is the build section of the metadata reference file that is used for the build process with the command mentioned above. The versionName and versionCode are found in the app's build.gradle file, commit denotes the commit id of the latest commit that will be checked out and built, and subdir gives the subdirectory of the app module; here the subdirectory is the app folder.

Next is the interesting part: since we are using flavors in the app, we have to mention in the gradle section the flavor we are using. In our case this is the flavor named “fdroid”, and by mentioning it we build only the “fdroid” flavor of the app. A sketch of such a flavor declaration is shown below.
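This sketch uses the Gradle Kotlin DSL purely for illustration (the project itself uses the Groovy DSL), and the flavor dimension and the second flavor name are assumptions:

// app/build.gradle.kts – illustrative only
android {
    flavorDimensions("default")
    productFlavors {
        create("fdroid") {
            dimension = "default"
            // keep proprietary dependencies (e.g. the YouTube player jar) out of this flavor
        }
        create("playStore") {
            dimension = "default"
        }
    }
}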

Also, while building the app, many blockers were faced; the usual build failures looked like this:

1 actionable task: 1 executed
INFO: Scanning source for common problems...
ERROR: Found usual suspect 'youtube.*android.*player.*api' at app/libs/YouTubeAndroidPlayerApi.jar
WARNING: Found JAR file at app/libs/YouTubeAndroidPlayerApi.jar
WARNING: Found possible binary at app/src/main/assets/snowboy/alexa_02092017.umdl
WARNING: Found possible binary at app/src/main/assets/snowboy/common.res
ERROR: Found shared library at app/src/main/jniLibs/arm64-v8a/libsnowboy-detect-android.so
ERROR: Found shared library at app/src/main/jniLibs/armeabi-v7a/libsnowboy-detect-android.so
INFO: Removing gradle-wrapper.jar at gradle/wrapper/gradle-wrapper.jar
ERROR: Could not build app ai.susi: Can't build due to 3 errors while scanning
INFO: Finished
INFO: 1 build failed

The reason for these build failures is that F-Droid does not allow us to ship prebuilt files or any proprietary software. The log above points to the two prebuilt shared libraries that should be removed, as well as YouTubeAndroidPlayerApi.jar, which is proprietary software and hence also needs to be removed. So, to remove the files that are not used in the fdroid flavor and exclude them from the build process, we include the following statements in the build section of the metadata reference file:

    rm:
      - app/src/main/jniLibs/arm64-v8a/libsnowboy-detect-android.so
      - app/src/main/jniLibs/armeabi-v7a/libsnowboy-detect-android.so
      - app/libs/YouTubeAndroidPlayerApi.jar

Once the metadata file is complete, we are ready to run the build command once again. If you have set up the environment properly on your local PC, the build will end successfully, assuming there are no Java or other language syntax errors.

It is worth mentioning a few other facts that are common to Android software projects. Usually the source code is kept in a folder named “app” inside the repository, which is the common scenario when Android Studio builds the project from scratch. If this “app” folder is one level below the root, that is “android/app”, the build instructions shown above will throw an error as the project files cannot be found.

The reason for this is that the metadata file says “subdir: app”. Change this to “subdir: android/app” and run the build again. The idea is to point the build at the location of the project files.

Reference:

  1. Metadata : https://f-droid.org/docs/Build_Metadata_Reference/#Build
  2. Publish an app on fdroid: https://blog.fossasia.org/publish-an-open-source-app-on-fdroid/

Show Option to choose WiFi for Smart Speaker Connection in SUSI.AI Android APP

The SUSI.AI Android app has the functionality to detect the available WiFi networks and check among them for the hotspot named “SUSI.AI”. This process is required so that the app can connect to the smart speaker and send it the WiFi credentials, the authentication credentials and the configuration data.

After clicking the “SET UP” button on the available SUSI.AI hotspot, as shown in the image below,

the app needs to make API requests to send the data to the speaker; the first API that needs to be hit is the one for the WiFi credentials. Once the “SET UP” button is clicked, the app shows a “Connecting to your device” message with a loader, as in the image below:

Now, during this step, the code to detect the available WiFi networks is run again and the list of available networks is sent from DeviceConnectPresenter.kt to DeviceConnectFragment.kt by the function defined in the presenter as follows:

override fun availableWifi(list: List<ScanResult>) {
  connections = ArrayList<String>()
  for (i in list.indices) {
      connections.add(list[i].SSID)
  }
  if (!list.isEmpty()) {
      deviceConnectView?.setupWiFiAdapter(connections)
  } else {
      deviceConnectView?.onDeviceConnectionError(utilModel.getString(R.string.no_device_found), utilModel.getString(R.string.setup_tut))
  }
}

Now, to show the list of available WiFi networks, a new ViewHolder had to be made, containing a TextView and an ImageView. The view holder file named WifiViewHolder.java is responsible for this and is used only with the DeviceAdapter.

The interesting thing is that the DeviceAdapter already inflates view items of the type DeviceViewHolder. Instead of making a new adapter for the WifiViewHolder view item, I wrote code that handles both view holders with the same adapter, i.e. DeviceAdapter. Let's now see how this was handled.

In the DeviceAdapter, a private integer variable named viewCode was added; it is responsible for segregating the two view holders using an if statement. The code below shows how the viewCode variable lets us choose and inflate one of the two view holders in the onCreateViewHolder() method:

if (viewCode == 1) {
  View v = LayoutInflater.from(parent.getContext()).inflate(R.layout.device_layout, parent, false);
  return new DeviceViewHolder(v, (DeviceConnectPresenter) devicePresenter);
} else {
  View v = LayoutInflater.from(parent.getContext()).inflate(R.layout.layout_wifi_item, parent, false);
  return new WifiViewHolder(v, (DeviceConnectPresenter) devicePresenter);
}

Now, when a view holder of type WifiViewHolder is inflated, the app displays the list of available WiFi networks, and on clicking any of the WiFi items the app asks the user to enter the credentials of that WiFi network. Below is how the app looks when the available WiFi networks are loaded and displayed, and when we click on any item in the list; a simplified sketch of the click handling follows.
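The real WifiViewHolder is a Java class and its exact fields are not shown in the post, so in the sketch below the layout id, the bind method and the presenter callback are assumptions:

import android.view.View
import android.widget.TextView
import androidx.recyclerview.widget.RecyclerView

class WifiViewHolder(
    itemView: View,
    private val presenter: DeviceConnectPresenter
) : RecyclerView.ViewHolder(itemView) {

    private val ssidText: TextView = itemView.findViewById(R.id.wifi_name) // assumed view id

    fun bind(ssid: String) {
        ssidText.text = ssid
        itemView.setOnClickListener {
            // Hand the selected SSID back so the app can ask for that network's password
            presenter.onWifiSelected(ssid) // assumed presenter callback
        }
    }
}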

References

  1. Android manipulating Wifi networks using WiFiManager : https://medium.com/@josiassena/android-manipulating-wifi-using-the-wifimanager-9af77cb04c6a
  2. Kotlin broadcast intents and receivers : https://www.techotopia.com/index.php/Kotlin_Android_Broadcast_Intents_and_Broadcast_Receivers
  3. Android Viewholder pattern : https://www.javacodegeeks.com/2013/09/android-viewholder-pattern-example.html