Adding IBM Watson TTS Support in Susi Assistant on Raspberry Pi

The SUSI Hardware project aims at creating a smart assistant for your home that you can run on your Raspberry Pi or similar development boards.
I previously wrote a blog post on choosing a suitable Text to Speech engine for SUSI AI and had settled on Flite as the solution. While Flite is an open source solution that can run locally on a client, it does not provide the same quality of voice and speed as cloud providers. We always crave a more natural voice for better interaction with our assistant, and it is always good to have more options. We therefore added the IBM Watson Text to Speech API to the SUSI Hardware project.

IBM Watson TTS can be added to a Python Project easily using the IBM Watson Developer SDK.

To use the IBM Watson Developer SDK for Text to Speech, we first need to sign up for Bluemix:
https://console.bluemix.net/registration/

After that, we will see an empty dashboard with no services added yet. We need to create a Text to Speech service. To do so, click on the Create Watson Service button.


Select Watson on the left pane and then select Text to Speech service from the list.

Select the Standard plan from the options and then click on the Create button.

You will get service credentials for your newly created Text to Speech service. Save them for future reference.

After that, we need to add the watson-developer-cloud Python package.

sudo pip3 install watson-developer-cloud

On Ubuntu with Python 3.5, watson-developer-cloud has some extra dependencies. Install them using the following command.

sudo apt install libssl-dev

Now we can add Text to Speech to our project. For that, we first need to import the TextToSpeechV1 class. It can be added using the following import statement.

from watson_developer_cloud import TextToSpeechV1

Now we need to create a new TextToSpeechV1 object using the Service Credentials we created earlier.

text_to_speech = TextToSpeechV1(
   username='API_USERNAME',
   password='API_PASSWORD')

We can now perform synthesis of a text input and write the incoming speech stream from IBM Watson API to a file.

with open('output.wav', 'wb') as audio_file:
    audio_file.write(
        text_to_speech.synthesize(text, accept='audio/wav', voice='en-US_AllisonVoice'))

In the above code snippet, we open an output file 'output.wav' for writing and then write the binary audio data returned by the text_to_speech.synthesize method to it. IBM Watson provides many free voices, so we supply an argument specifying which voice to use; here we are using the English female voice 'en-US_AllisonVoice'. You may test out more voices in the online demo and select the voice that you find best.

We can play the 'output.wav' file using the play command from SoX. To do so, we need to install the SoX binary.

sudo apt install sox libsox-fmt-all

We can play the file easily now using the following code.

import os
os.system('play output.wav')

The above code invokes the 'play' command from the SoX package to play the audio file. We could also use PyAudio to play the audio file, but it would require us to manage the audio thread separately, so SoX is the simpler solution.
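
Putting the pieces together, here is a minimal sketch of how the whole flow might look, assuming the same watson-developer-cloud SDK and SoX setup described above. The credentials and the speak helper itself are placeholders for illustration.

import os
from watson_developer_cloud import TextToSpeechV1

# Placeholder credentials from the Bluemix service credentials page
text_to_speech = TextToSpeechV1(
    username='API_USERNAME',
    password='API_PASSWORD')

def speak(text, voice='en-US_AllisonVoice'):
    """Synthesize text with IBM Watson TTS and play it with SoX."""
    with open('output.wav', 'wb') as audio_file:
        audio_file.write(
            text_to_speech.synthesize(text, accept='audio/wav', voice=voice))
    os.system('play output.wav')  # requires the SoX 'play' binary

speak('Hello, I am SUSI.')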

Resources:


Setup SUSI Assistant on Raspberry Pi in under 30 minutes

With our ever growing list of platforms supported by SUSI AI, we now have a client that can run on the Raspberry Pi, and you can access it hands-free! There is a video demonstrating it in action that you can refer to.

But it might have left you wondering how you can replicate such a setup yourself. It is fairly easy and can be done quickly. Just follow the instructions below.

You need the following hardware in order to have your own SUSI Assistant running on a Raspberry Pi.

  • A Raspberry Pi (preferably a 2 or 3) with the Raspbian Jessie OS.
  • A stable internet connection (4 Mbps recommended).
  • A USB microphone / USB webcam with microphone. You may buy one like this.
  • A speaker that connects through the 3.5mm jack. You may buy one like this.

After you get all the above items in order, you need to get access to a terminal of your Raspberry Pi. You can do that either by connecting a monitor to the Raspberry Pi temporarily or by connecting to the Raspberry Pi over SSH.

Once this is done, the next step is the installation of the dependencies. The installation of SUSI on the Raspberry Pi is automated once the dependencies are installed. Run the following command on the Raspberry Pi terminal.

sudo apt install git swig3.0 portaudio19-dev pulseaudio libpulse-dev unzip sox libatlas-dev libatlas-base-dev libsox-fmt-all python3

After this, you may check whether your output and input devices are working properly. To do so, run rec recording.wav. It will start recording audio and save it to a file named recording.wav. Play the file back using play recording.wav. If you hear your audio clearly, the setup is done right; otherwise you need to configure your audio devices correctly. Most of the time the audio configuration works out of the box and the devices are plug and play, so you should not encounter any errors. Once your devices are configured successfully, install the extra dependencies for SUSI Hardware by running the automated install script. In your terminal, run:

$ git clone https://github.com/fossasia/susi_hardware.git
$ cd susi_hardware
$ ./install.sh 

This will install all the remaining dependencies. After the above step is complete, you may run the configuration generator script to choose the Text to Speech and Speech to Text services according to your wish. To do so, you need to run

$ python3 config_generator.py

Follow the instructions in the script. It will ask you to configure the default services for Text to Speech and Speech to Text, along with other options. After the configuration is complete, you can simply run the following command to start SUSI.

$ python3 main.py

This will start SUSI in continuous listening mode. You may invoke SUSI at any time just by saying SUSI followed by a query, which SUSI will then answer.

Since configurations for different hardware devices may vary, you may encounter some problems. In such a scenario, you may refer to the following resources to solve the issues.

Resources:


Building Meilix in Travis using Heroku

Suppose you have to trigger (start) Travis not by making a commit but by clicking a button on the webapp of the Meilix Generator. Through the webapp we can pass a tag for the build that will be initiated and also get the link to the build produced by Travis.
Heroku is where we have deployed our webapp, and a button on the webapp is used to start the build on Travis. We have the ability to give the build a tag, and with its help we can even predict the URL of the build beforehand. So one can use this for one's own personal project in a number of ways. How I used this feature in Meilix Generator with the Meilix script is described below:

How I used this idea

The FOSSASIA meilix repository contains the script of a Linux operating system based on Lubuntu. It uses Travis to build that script, resulting in a release of an ISO file.

We then thought of building an autonomous system to start this build and get the release, and in the meantime also make the required changes to the script so they end up in the OS. We came up with the idea of a webapp which asks the user for their email id and a tag for the build and, as of now, a picture which will be set as the wallpaper. This means the user is able to configure their distro according to their needs through a graphical interface, without a single line of code from the user's end.

In the webapp, a click on the build button leads to a build page which triggers Travis with the same user configuration, builds the ISO and deploys it on a GitHub release page. The user gets the link to the build right on the next page.

How I implemented this idea

Thanks to the Travis API, without which our idea would have been impossible to implement. We used a shell script to frame our idea. The script takes the user, repository and branch as input to decide where the trigger should take place.

There are two files. The first one, travis_tokens, contains:

fossasia meilix master    # in the format of user repo branch

And the second, script.sh:

#!/bin/bash
# read the user, repo and branch from travis_tokens
cat travis_tokens | while read line;
do
    array=(${line})
    user="${array[0]}"
    project="${array[1]}"
    len=${#array[@]}
    for ((i=2; i<len; i++)); do
        branch="${array[i]}"            # each value supplied as a variable
        # email and TRAVIS_TAG are supplied as environment variables
        body="{\"request\":{
            \"branch\":\"${branch}\",
            \"config\":{
                \"env\":{
                    \"email\":\"${email}\",
                    \"TRAVIS_TAG\":\"${TRAVIS_TAG}\"
                }
            }
        }}"
        # a pre-prediction of the link: tag from the user, date from the system
        echo "This Link Will be ready in approx 20 minutes"
        echo "https://github.com/fossasia/meilix/releases/download/${TRAVIS_TAG}/meilix-zesty-`date +%Y%m%d`-i386.iso"
        # send an API POST request to Travis to trigger a build of the most recent commit;
        # KEY is stored on Heroku as an environment variable and supplied from there;
        # %2F makes Travis interpret user and repo name as a single URL segment
        curl -s -X POST \
            -H "Content-Type: application/json" \
            -H "Accept: application/json" \
            -H "Travis-API-Version: 3" \
            -H "Authorization: token ${KEY}" \
            -d "${body}" \
            "https://api.travis-ci.org/repo/${user}%2F${project}/requests"
    done
done

After the trigger, you will get an email containing a downloadable link to the ISO.

How can this idea be helpful to a developer

There are lots of ways a developer can use this idea, for example if a developer wants their users to trigger the build automatically and get the release build directly.

One can even set the commit message through the shell script and customize the build configuration, for example choosing whether to replace, merge or deep_merge a configuration with the original .travis.yml file present in the source repo, as sketched below.
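
As a rough illustration, a Python equivalent of the shell trigger above could look like the sketch below, sent with the requests library. The message and merge_mode fields follow my reading of the Travis API v3 build-trigger documentation and should be verified against it; the token, slug, tag and email values are placeholders.

# A rough Python sketch of triggering a Travis build, for illustration only.
import os
import requests

def trigger_build(user, project, branch, travis_tag, email):
    body = {
        "request": {
            "branch": branch,
            # Custom commit message shown for the triggered build
            "message": "Build triggered from Meilix Generator",
            "config": {
                # merge_mode controls how this config combines with .travis.yml
                # (replace, merge or deep_merge) -- check the Travis docs for placement
                "merge_mode": "deep_merge",
                "env": {
                    "email": email,
                    "TRAVIS_TAG": travis_tag,
                }
            },
        }
    }
    headers = {
        "Content-Type": "application/json",
        "Accept": "application/json",
        "Travis-API-Version": "3",
        # KEY is the Travis token stored as an environment variable (e.g. on Heroku)
        "Authorization": "token " + os.environ["KEY"],
    }
    # %2F encodes the slash so user and repo form a single URL segment
    url = "https://api.travis-ci.org/repo/{}%2F{}/requests".format(user, project)
    return requests.post(url, json=body, headers=headers)

response = trigger_build("fossasia", "meilix", "master", "mytag", "user@example.com")
print(response.status_code)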

Useful repositories and links which use this:

Know more about Travis API v3:
Triggering the build
API blog

Have a look at our webapp and generate your own iso:
https://melix-generator.herokuapp.com/

Source code here:
https://github.com/fossasia/meilix-generator
https://github.com/fossasia/meilix


Making loklak Server’s Kaizen Harvester Extendable

Harvesting strategies in loklak are something that the end users can’t see, but they play a vital role in deciding the way in which messages are collected with loklak. One of the strategies in loklak is defined by the Kaizen Harvester, which generates queries from collected messages.

The original strategy used a simple hash queue which drops queries once it is full. This effect is not desirable as we tend to lose important queries in this process if they come up late while harvesting. To overcome this behaviour without losing important search queries, we needed to come up with new harvesting strategy(ies) that would provide a better approach for harvesting. In this blog post, I am discussing the changes made in the kaizen harvester so it can be extended to create different flavors of harvesters.

What can be different in extended harvesters?

To make the Kaizen harvester extendable, we first needed to decide which parts of the original Kaizen harvester could be changed to make the strategy different (and probably better).

Since one of the most crucial parts of the Kaizen harvester is the way it stores queries to be processed, it was one of the most obvious things to change. Another thing that should be configurable across strategies is the decision of whether to harvest the queries from the query list at a given moment.

Query storage with KaizenQueries

To allow different methods of storing the queries, the KaizenQueries class was introduced in loklak. It defines the basic methods that any query-storing technique needs to provide. A query-storing technique can be any data structure that we can use to store search queries for the Kaizen harvester.

public abstract class KaizenQueries {

     public abstract boolean addQuery(String query);

     public abstract String getQuery();

     public abstract int getSize();

     public abstract int getMaxSize();

     public boolean isEmpty() {
         return this.getSize() == 0;
     }
}

[SOURCE]

Also, a default implementation of KaizenQueries was introduced for use in the original Kaizen harvester. It provides the same interface as the original queue that was used in the harvester.

Another constructor was introduced in the Kaizen harvester which allows setting the KaizenQueries for an instance of its derived classes. It solves the problem of providing a KaizenQueries interface inside the Kaizen harvester which can be used by any inherited strategy –

private KaizenQueries queries = null;

public KaizenHarvester(KaizenQueries queries) {
    ...
    this.queries = queries;
    ...
}

public void someMethod() {
    ...
    this.queries.addQuery(someQuery);
    ...
}

[SOURCE]

With this added, getting or adding new queries became simple. We just need to use the getQuery() and addQuery() methods without worrying about the internal implementation.

Configurable decision for harvesting

As mentioned earlier, the decision to harvest should also be configurable. For this, a protected method was implemented and used in the harvest() method –

protected boolean shallHarvest() {
    float targetProb = random.nextFloat();
    float prob = 0.5F;
    if (this.queries.getMaxSize() > 0) {
        prob = queries.getSize() / (float)queries.getMaxSize();
    }
    return !this.queries.isEmpty() && targetProb < prob;
}

@Override
public int harvest() {
    if (this.shallHarvest()) {
        return harvestMessages();
    }

    grabSuggestions();

    return 0;
}

[SOURCE]

The shallHarvest() method can be overridden by derived classes to implement whatever harvesting decision they want. For example, we can configure it to harvest whenever there are any queries in the queue, or to use a different probability distribution instead of a linear one (maybe a Gaussian centred around 1).

Conclusion

This blog post explained the changes made in loklak's Kaizen harvester which allow other harvesting strategies to be built on top of it. It discussed the two components changed and how they ease inheriting from the original Kaizen harvester. These changes were proposed in PR loklak/loklak_server#1203 by @singhpratyush (me).

Resources


Accessing Child Component’s API in Loklak Search

Loklak search, being an Angular application, is composed of components. Components give us a way to organize the application in a more consistent way, along with providing the ability to reuse code in the application. Each component has two types of API: public and private. The public API is the API it exposes to the outer world for manipulating the working of the component, while the private API is local to the component and cannot be directly accessed by the outside world. Now that this distinction between the two is clear, it is important to state the need for these APIs and why they are required in loklak search.

Components can never live in isolation, i.e. they have to communicate with their parent to be able to function properly. The same is the case with the components of loklak search: they have to interact with each other to make the application work. So what does this interaction look like?

The rule of thumb here is: data flows down, events flow up. This is the core idea of all modern SPA frameworks, unidirectional data flow, and these interactions can be seen everywhere in loklak search.

<feed-header
   [query]="query"
   (searchEvent)="doSearch($event)"></feed-header>

This is how a simple component's API looks in loklak search. Here our component is FeedHeader, and it exposes some of its API as inputs and outputs.

export class FeedHeaderComponent {

 @Input() query: string;

 @Output() searchEvent: EventEmitter<string> = new EventEmitter<string>();

  // Other methods and properties of the component
}

The FeedHeaderComponent's class defines the inputs it takes. These inputs are the data given to the component. Here the input is a simple query property, and the parent, at the time of instantiating the component, passes the value to its child as [query]="query". This enables one direction of the API, from parent to child. We also need a way for the parent to respond to events generated by the child when the user interacts with it. For example, here we need a way to tell the parent to perform a search whenever the user presses the search button. For this the Output property searchEvent is used. The search event can be emitted by the child component independently, while the parent, if it wants to listen to the child, simply binds to the event and runs a corresponding function whenever the event is emitted: (searchEvent)="doSearch($event)". Here the event the parent listens to is searchEvent, and whenever such an event is emitted by the child, the parent runs the doSearch function. This completes the event flow from child to parent.

It is worth noticing that all these inputs for data and outputs for events are provided by the child component itself. They are the API of the child, and the parent's job is just to bind to these inputs and outputs in order to pass data and listen to events. This allows component interaction in both directions.

@ViewChild and triggering child’s methods

Inputs are important to carry data from the parent to the child declaratively, but sometimes it is necessary for the parent to access the public API of its child more directly, especially the API methods that trigger an action. These methods require a way for the parent to get hold of its child component, which is done with the @ViewChild decorator: the parent declares a property decorated with @ViewChild that references the child component. In our example, the FeedHeaderComponent needs access to its child component SuggestBoxComponent to show and hide the suggest box as and when required, so the feed header component gets access to its child using the ViewChild decorator.

export class FeedHeaderComponent {

  @ViewChild('suggestBox') suggestBox: SuggestBoxComponent;

  // Other properties and methods

  toggleSuggestBox() {

     this.suggestBox.toggle();
  }
}

The SuggestBoxComponent here has a public method toggle() which toggles the visibility state of the suggest box. This method is available as a component’s public API method. The parent of this component calls this method using the @ViewChild reference which it grabbed at the time of view instantiation.

export class SuggestBoxComponent {

  private suggestBoxVisible = true;

  public toggle() {

     this.suggestBoxVisible = !this.suggestBoxVisible;
  }
}

Resources and Links

  • Angular document pages
  • Basic Usage in Angular tour of heroes tutorial
  • In depth usage blog for Inputs and Outputs SitePoint Tutorial
  • Loklak Search Repo

Building the Meilix Generator with Flask

Meilix Generator is a webapp which is used to trigger the Travis build of Meilix and mail the user the link to the ISO. The Meilix Generator webapp is based on Flask. This blog post shows how easy it is to build a webapp, render HTML templates in it and call and pass data to various functions. I used Flask, the Python micro framework, to render the HTML templates and handle requests for various purposes (mentioned later in the article) without coding everything from scratch, thanks to Flask's import facility.

What is Flask?

Flask is a Python micro web framework based on Werkzeug and the Jinja 2 template engine. It is used as the backbone of the webapp. It gives us a small set of Python building blocks from which we can easily generate a webapp. It is "micro" because it ships with no extra tools or libraries itself; it comes with minimal requirements, and whoever needs more can import additional libraries and use them. For Meilix Generator I imported several helpers such as render_template and send_from_directory.

Implementation (The use case in Meilix Generator)

First of all, the installation process: we will do the installation in a virtual environment. We prefer a virtual environment to isolate the Python working environment, since some programs require different Python versions to work.
Install virtual environment 

sudo pip install virtualenv

Now go to the project folder, create a virtual environment named venv (virtualenv venv) and activate it using

. venv/bin/activate

Now install Flask

pip install flask
Creating your project

Now it’s time to create a simple project in the directory.
Let's use HTML as the frontend. In the folder, create styles.css for styling and an index.html template for the frontend of the page. We will make one app.py file which would look similar to this:

from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def index():
    """Index page"""
    return render_template("index.html")

if __name__ == '__main__':
    app.run()

Flask routes the / (root) path to the index function, which returns the main template (index.html).

Compiling it to view the page:

export FLASK_DEBUG=1 FLASK_APP=app.py
flask run

You will find your page at http://127.0.0.1:5000

More options (how else it can help you)

  • Add more HTML template options and reference them in app.py.
  • Easily use the GitHub API from a separate .py file (imported into app.py) to fetch data from endpoints like https://api.github.com/users/user_name, which returns the user's name, repos, followers and much more (a sketch of this is shown below).
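
As a rough, self-contained sketch of the GitHub API idea above: the route, helper name and displayed fields are illustrative choices, while https://api.github.com/users/<username> is GitHub's public users endpoint.

import requests
from flask import Flask, jsonify

app = Flask(__name__)

def fetch_github_user(username):
    """Fetch public profile data for a GitHub user."""
    response = requests.get("https://api.github.com/users/{}".format(username))
    response.raise_for_status()
    return response.json()

@app.route('/github/<username>')
def github_profile(username):
    data = fetch_github_user(username)
    # Expose a few interesting fields from the GitHub response
    return jsonify(name=data.get("name"),
                   public_repos=data.get("public_repos"),
                   followers=data.get("followers"))

if __name__ == '__main__':
    app.run()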

How I used this idea for FOSSASIA (Meilix Generator)

I used Flask as the backbone of the Meilix Generator project. First, I used from ... import statements to pull in the various libraries needed for the project and then wrote several functions around them. Let's understand the concept using a few examples:

from flask import Flask, render_template
@app.route('/about')
def about():
		#About page
		return render_template("about.html")

or

from flask import Flask, send_from_directory
@app.route('/uploads/<filename>')
def uploaded_file(filename):
		return send_from_directory(app.config['UPLOAD_FOLDER'],filename)

For more details, see the app.py file in the Meilix Generator repository, where we used the above idea.

Important Links and Repositories:


How to Store and Retrieve User Settings from SUSI Server in SUSI iOS

Any user of the SUSI iOS client can set preferences like enabling or disabling hot word recognition or enabling input from the microphone. These settings need to be stored in order to be used across all platforms such as web, Android or iOS. Now, in order to store these settings and maintain synchronization between all the clients, we make use of the SUSI server. The server provides an endpoint to retrieve these settings when the user logs in.

First, we will focus on storing settings on the server followed by retrieving settings from the server. The endpoint to store settings is as follows:

http://api.susi.ai/aaa/changeUserSettings.json?key=key&value=value&access_token=ACCESS_TOKEN

This takes the key-value pair for the setting to be stored and an access token to identify the user as parameters in the GET request. Let's start by creating a method that takes the params as input, calls the API to store the setting and returns a status specifying whether the request executed successfully or not.

let url = getApiUrl(UserDefaults.standard.object(forKey: ControllerConstants.UserDefaultsKeys.ipAddress) as! String, Methods.UserSettings)

_ = makeRequest(url, .get, [:], parameters: params, completion: { (results, message) in
    if let _ = message {
        completion(false, ResponseMessages.ServerError)
    } else if let results = results {
        guard let response = results as? [String : AnyObject] else {
            completion(false, ResponseMessages.ServerError)
            return
        }
        if let accepted = response[ControllerConstants.accepted] as? Bool, let message = response[Client.UserKeys.Message] as? String {
            if accepted {
                completion(true, message)
                return
            }
            completion(false, message)
            return
        }
    }
})

Let's understand this function line by line. First we generate the URL by supplying the server address and the method. Then we pass the URL and the params to the `makeRequest` method, which has a completion handler returning a results object and an error object. Inside the completion handler we check for any error; if one exists, we mark the request as completed with an error. Otherwise we check that the results object is a dictionary containing the key `accepted`; if this key is `true`, our request executed successfully, so we mark it as such and return. After making this method, it needs to be called in the view controller, which we do with the following code.

Client.sharedInstance.changeUserSettings(params) { (_, message) in
  DispatchQueue.global().async {
    self.view.makeToast(message)
  }
}

The code above takes as input params containing the user token and the key-value pair for the setting that needs to be stored. The request runs on a background thread and displays a toast message with the result.

Now that the settings have been stored on the server, we need to retrieve them every time the user logs in to the app. Below is the endpoint for that:

http://api.susi.ai/aaa/listUserSettings.json?access_token=ACCESS_TOKEN

This endpoint accepts the user's access token, which is generated when the user logs in and uniquely identifies the user, and returns that user's settings. Let's create the method that calls this endpoint and parses and saves the settings data in the iOS app's User Defaults.

if let _ = message {
  completion(false, ResponseMessages.ServerError)
} else if let results = results {
  guard let response = results as? [String : AnyObject] else {
    completion(false, ResponseMessages.ServerError)
    return
  }
  guard let settings = response[ControllerConstants.Settings.settings.lowercased()] as? [String:String] else {
    completion(false, ResponseMessages.ServerError)
    return
  }
  for (key, value) in settings {
    if value.toBool() != nil {
      UserDefaults.standard.set(value.toBool()!, forKey: key)
    } else {
      UserDefaults.standard.set(value, forKey: key)
    }
  }
  completion(true, response[Client.UserKeys.Message] as? String ?? "error")
}

Here, the URL is created in the same way as above, the only difference being the method passed. We parse the settings into a dictionary, then loop through all the keys and store each value in User Defaults under its key. We simply call this method just after the user logs in, as follows:

Client.sharedInstance.fetchUserSettings(params as [String : AnyObject]) { (success, message) in
  DispatchQueue.global().async {
    print("User settings fetch status: \(success) : \(message)")
  }
}

That’s all for this tutorial where we learned how to store and retrieve settings on the SUSI Server.
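
The same endpoints can also be exercised outside the app, which is handy while debugging what the server returns. Below is a rough Python sketch; the access token and the example key/value pair are placeholders.

# Rough sketch for exercising the SUSI settings endpoints while debugging.
import requests

BASE = "http://api.susi.ai/aaa"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

# Store a single setting (key/value are illustrative placeholders)
store = requests.get(BASE + "/changeUserSettings.json",
                     params={"key": "exampleSetting",
                             "value": "true",
                             "access_token": ACCESS_TOKEN})
print(store.json())

# Retrieve all settings for this user
fetch = requests.get(BASE + "/listUserSettings.json",
                     params={"access_token": ACCESS_TOKEN})
print(fetch.json())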

References


Save Chat Messages using Realm in SUSI iOS

Fetching data from the server each time causes network load and makes the app depend on the server and the network in order to display data. We use an offline database to store chat messages so that we can show messages to the user even when the network is not available, which improves the user experience. Realm is used as the data storage solution because of its ease of use and because it is fast and efficient. So, in order to save messages received from the server locally in a database in SUSI iOS, we are using Realm; the reasons for choosing it are mentioned below.

The major upsides of Realm are:

  • It's absolutely free of charge.
  • Fast and easy to use.
  • Unlimited use.
  • Works on its own persistence engine for speed and performance.

Below are the steps to install and use Realm in the iOS Client:

Installation:

  • Install Cocoapods.
  • Run `pod repo update` in the root folder.
  • In your Podfile, add use_frameworks! and pod 'RealmSwift' to your main and test targets.
  • From the command line, run `pod install`.
  • Use the `.xcworkspace` file generated by Cocoapods in the project folder alongside the `.xcodeproj` file.

After installation we start by importing `Realm` in the `AppDelegate` file and start configuring Realm as below:

func initializeRealm() {
        var config = Realm.Configuration(schemaVersion: 1,
            migrationBlock: { _, oldSchemaVersion in
                if (oldSchemaVersion < 0) {
                    // Nothing to do!
                }
        })
        config.fileURL = config.fileURL?.deletingLastPathComponent().appendingPathComponent("susi.realm")
        Realm.Configuration.defaultConfiguration = config
}

Next, let's head over to creating a few models which will be used to save the data to the DB as well as to help retrieve that data so it can be easily used. Since the SUSI server has a number of action types, we will cover some of them, their models and how they are used to store and retrieve data. Below are the currently available action types that the server supports.

enum ActionType: String {
  case answer
  case websearch
  case rss
  case table
  case map 
  case anchor
}

Let’s start with the creation of the base model called `Message`. To make it a RealmObject, we import `RealmSwift` and inherit from `Object`

class Message: Object {
  dynamic var queryDate = NSDate()
  dynamic var answerDate = NSDate()
  dynamic var message: String = ""
  dynamic var fromUser = true
  dynamic var actionType = ActionType.answer.rawValue
  dynamic var answerData: AnswerAction?
  dynamic var mapData: MapAction?
  dynamic var anchorData: AnchorAction?
}

Let’s study these properties of the message one by one.

  • `queryDate`: saves the date-time the query was made
  • `answerDate`: saves the date-time the query response was received
  • `message`: stores the query/message that was sent to the server
  • `fromUser`: a boolean which keeps track who created the message
  • `actionType`: stores the action type
  • `answerData`, `rssData`, `mapData`, `anchorData` are the data objects that actually store the respective action’s data

To initialize this object, we need to create a method that takes as input the data received from the server.

// saves query and answer date
if let queryDate = data[Client.ChatKeys.QueryDate] as? String,
  let answerDate = data[Client.ChatKeys.AnswerDate] as? String {
  message.queryDate = dateFormatter.date(from: queryDate)! as NSDate
  message.answerDate = dateFormatter.date(from: answerDate)! as NSDate
}

if let type = action[Client.ChatKeys.ResponseType] as? String,
  let data = answers[0][Client.ChatKeys.Data] as? [[String : AnyObject]] {
  if type == ActionType.answer.rawValue {
    message.message = action[Client.ChatKeys.Expression] as! String
    message.actionType = ActionType.answer.rawValue
    message.answerData = AnswerAction(action: action)
  } else if type == ActionType.map.rawValue {
    message.actionType = ActionType.map.rawValue
    message.mapData = MapAction(action: action)
  } else if type == ActionType.anchor.rawValue {
    message.actionType = ActionType.anchor.rawValue
    message.anchorData = AnchorAction(action: action)
    message.message = message.anchorData!.text
  }
}

Since the response from the server for a particular query might contain numerous action types, we create a loop inside a method to capture all those action types and save each one of them. Because there are multiple action types, we need a list containing all the messages created for them. For each action in the loop, the corresponding data is saved into its specific object.

Let’s discuss the individual action objects now.

1)   AnswerAction
class AnswerAction: Object {
  dynamic var expression: String = ""
  convenience init(action: [String : AnyObject]) {
    self.init()
    if let expression = action[Client.ChatKeys.Expression] as? String {
      self.expression = expression
    }
  }
}

This is the simplest action type implementation. It contains a single property, `expression`, which is a string. To initialize it, we take the action object, extract the expression key's value and save it.

if type == ActionType.answer.rawValue {
  message.message = action[Client.ChatKeys.Expression] as! String
  message.actionType = ActionType.answer.rawValue
  // pass action object and save data in `answerData`
  message.answerData = AnswerAction(action: action)
}

Above is the way an answer action is checked and data saved inside the `answerData` variable.

2)   MapAction

class MapAction: Object {
  dynamic var latitude: Double = 0.0
  dynamic var longitude: Double = 0.0
  dynamic var zoom: Int = 13

  convenience init(action: [String : AnyObject]) {
    self.init()
    if let latitude = action[Client.ChatKeys.Latitude] as? String,
    let longitude = action[Client.ChatKeys.Longitude] as? String,
    let zoom = action[Client.ChatKeys.Zoom] as? String {
      self.longitude = Double(longitude)!
      self.latitude = Double(latitude)!
      self.zoom = Int(zoom)!
    }
  }
}

This action implementation contains three properties: `latitude`, `longitude` and `zoom`. Since the server returns the values as strings, each of them needs to be converted to its respective type using force-casting. Default values are provided for each property in case an illegal value comes from the server.

3)   AnchorAction

class AnchorAction: Object {
  dynamic var link: String = ""
  dynamic var text: String = ""

  convenience init(action: [String : AnyObject]) {
    self.init()
    if let link = action[Client.ChatKeys.Link] as? String,
    let text = action[Client.ChatKeys.Text] as? String {
      self.link = link
      self.text = text
    }
  }
}

Here, the link to the openstreetmap website is saved in order to retrieve the image for displaying.

Finally, we need to call the API, create the message object and use the `write` block of a Realm instance to save it into the DB.

if success {
  self.collectionView?.performBatchUpdates({
    for message in messages! {
      // realm write block
      try! self.realm.write {
        self.realm.add(message)
        self.messages.append(message)
        let indexPath = IndexPath(item: self.messages.count - 1, section: 0)
        self.collectionView?.insertItems(at: [indexPath])
      }
    }
  }, completion: { (_) in
    self.scrollToLast()
  })
}

Each message object is appended to the list of message items and inserted into the collection view. The stored data can be inspected with the Realm Browser, a UI for viewing the database.

References:


Adding Tumblr Upload Feature in Phimpme

The Phimpme Android application, along with various other cloud storage and social media upload features, provides an option to upload images to Tumblr without having to download any other application. In this post, I will explain how I integrated Tumblr in Phimpme, as there is no proper guide on the web for integrating Tumblr in Android. Tumblr provides an Android SDK, but there is no proper documentation for it and it is not enough on its own to authenticate and upload images. After a lot of research I came to a solution, so read this article to learn how to integrate Tumblr in Android.

Step 1:

First, add two dependencies to your project: one for the Tumblr Android SDK (Jumblr) and one for Loglr, which helps you log in to Tumblr.

dependencies {
    compile 'com.daksh:loglr:1.2.1'
    compile 'com.tumblr:jumblr:0.0.11'
}

Step 2:

  1. Register your app on Tumblr to obtain developer keys.
  2. Enter a callback URL; it is important for obtaining the keys.
  3. Generate CONSUMER_KEY & CONSUMER_SECRET from the official developer console of Tumblr.

Register your application

Step 3:

Now we use the Loglr library to log in to Tumblr; Tumblr doesn't provide any library for login, so I am using Loglr. After a successful login, Loglr will return the API_KEY and API_SECRET (the OAuth token and secret), which we will use later to upload the image. Save the consumer keys as constant variables.

public final static String TUMBLR_CONSUMER_KEY = "ENTER-CONSUMER-KEY";

public final static String TUMBLR_CONSUMER_SECRET = "ENTER-CONSUMER-SECRET";

Step 4:

To authenticate with Tumblr, use the Loglr login instance. This can be done as follows.

Loglr.getInstance()
     .setConsumerKey(Constants.TUMBLR_CONSUMER_KEY)
     .setConsumerSecretKey(Constants.TUMBLR_CONSUMER_SECRET)
     .setLoginListener(loginListener)
     .setExceptionHandler(exceptionHandler)
     .enable2FA(true)
     .setUrlCallBack(Constants.CALL_BACK_TUMBLR)
     .initiateInActivity(AccountActivity.this);

After that, you will be prompted to enter your Tumblr credentials to authenticate the Phimpme Android app. Once you have done so, it will return the OAuth token and secret. Now save these in the database.

account.setToken(loginResult.getOAuthToken());
account.setSecret(loginResult.getOAuthTokenSecret());

Step 5:

Once the authentication is done, we can upload an image directly to Tumblr from the Share activity in the Phimpme Android application. To upload an image, create an AsyncTask so that the uploading process runs in a background thread and does not block the main UI thread. Keep in mind that Tumblr requires 4 values to create a Tumblr client: CONSUMER_KEY, CONSUMER_SECRET, API_KEY and API_SECRET (the OAuth token and secret obtained above). Once the client is created, we are ready to get data from Tumblr and upload an image to it. Before uploading an image we need the blog name, because the user can have multiple blogs on Tumblr, so we need to ask the user to choose a blog name from a list or provide a dialog to enter the blog name manually. Now enter the following code in the doInBackground() method of the AsyncTask.

PhotoPost post = null;
try {
  post = client.newPost(user.getBlogs().get(0).getName(), PhotoPost.class);
  if (caption != null && !caption.isEmpty())
    post.setCaption(caption);
  post.setData(new File(imagePath));
  post.save();
} catch (IllegalAccessException | InstantiationException e) {
  success = false;
}

If the success variable is true, it means our image was uploaded successfully. This is how I implemented the upload feature to Tumblr using two different libraries. To get the full source code, please refer to the Phimpme Android repository.

Resources:


Uploading Images to Box Storage from the Phimpme Application

The Phimpme Android application, which has many cloud storage services like Dropbox, Imgur and Pinterest integrated, also has the option to upload images to Box storage without having to install any other application on the device. From the Phimpme app, the user can click a photo, edit it, view any image from the gallery and then upload multiple images to many storage services or social media without any hassle. In this tutorial, I will discuss how we achieved the functionality to upload images to Box storage.

Step 1:

To integrate Box storage into the application, the first thing we need to do is create an application from the Box developers console and get the CLIENT_ID and CLIENT_SECRET. To do this:

  1. Go to the Box developer page and log in.
  2. Create a new application from the app console.
  3. You will now be given options to select what kind of application you are creating. Select the option Partner Integration so that all users get the image upload functionality.
  4. Give a name to your application and finally click the Create Application button.
  5. After this, you will be taken to a screen from where you can copy the CLIENT_ID and CLIENT_SECRET to be used later on.

Step 2:

Now coming to the Android part: to get the image upload functionality for Box storage, we make use of the Android SDK provided by Box, which makes uploading easy to achieve. To add the SDK to the Android project from Gradle, copy the following lines of Gradle script, which will download and install the SDK from the Maven repository.

maven{ url "https://mvnrepository.com/artifact/com.box/box-android-sdk" }
compile group: 'com.box', name: 'box-android-sdk', version: '4.0.8'

Now after adding the SDK, rebuild the project.

Step 3:

After this, to upload a file to Box storage, we need to log in to the Box account to get the access token and the user name and store them in the Realm database. For this, we first have to configure the Box API client. This can be done using the following lines of code.

BoxConfig.CLIENT_ID = BOX_CLIENT_ID;
BoxConfig.CLIENT_SECRET = BOX_CLIENT_SECRET;

Replace the BOX_CLIENT_ID and BOX_CLIENT_SECRET in the above code with the keys obtained by following step 1.

After configuring the Box API client, we can authenticate the user using the sessionBox object with the following lines of code.

sessionBox = new BoxSession(AccountActivity.this);
sessionBox.authenticate();

After the authentication, we have to get the user name and the access token of the user using the authenticated session box.

account.setUsername(sessionBox.getUser().getName());
account.setToken(String.valueOf(accessToken));

After authenticating the user, we can upload photos directly to Box storage from the Share Activity in the Phimpme Android application. For this, we have to create a new asynchronous task in Android which does the network operations in the background to avoid the NetworkOnMainThreadException. After this, we can make use of the Box file API to upload photos to Box storage by calling the function getUploadRequest, which takes three parameters: the file input stream, the upload name of the file and the destination folder id. This can be done with the following lines of code.

mFileApi = new BoxApiFile(sessionBox);
BoxRequestsFile.UploadFile request = mFileApi.getUploadRequest(inputStream, uploadName, destinationFolderId);

This upload request throws a BoxException, so we have to catch that exception using a try/catch block to avoid an application crash in case the upload request fails.

This is how we have implemented the upload to Box storage functionality in the Phimpme Android application. To get the full source code, please refer to the Phimpme Android repository.

Resources

  1. Box Developers app console : https://app.box.com/developers/console/
  2. Box Android SDK GitHub page : https://github.com/box/box-android-sdk
  3. Android's Developer page on NetworkOnMainThreadException : https://developer.android.com/reference/android/os/NetworkOnMainThreadException.html