Adding 3D Home Screen Quick Actions to SUSI iOS App

Home screen quick actions are a convenient way to perform useful, app-specific actions right from the Home screen, using 3D Touch. Apply a little pressure to an app icon with your finger—more than you use for tap and hold—to see a list of available quick actions. Tap one to activate it. Quick actions can be static or dynamic.

We have added some 3D home screen quick actions to our SUSI iOS app. In this post, we will see how they are implemented and how they work.

The following 3D home screen quick actions are added to SUSI iOS:

  • Open SUSI Skills – users can go directly to SUSI skills without opening the chat screen.
  • Customize Settings – users can customize their settings directly using this quick action.
  • Setup A Device – helpful when the user quickly wants to configure his/her device for the SUSI Smart Speaker.
  • Change SUSI’s Voice – users can change the language accent SUSI uses to read messages directly from this quick action.

Each Home screen quick action includes a title, an icon on the left or right (depending on your app’s position on the home screen), and an optional subtitle. The title and subtitle are always left-aligned in left-to-right languages.

Step 1 – Adding the Shortcut Items

We add static home screen quick actions using the UIApplicationShortcutItems array in the app's Info.plist file. Each entry in the array is a dictionary containing items matching properties of the UIApplicationShortcutItem class. As seen in the screenshot below, we have 4 shortcut items, and each item has three properties: UIApplicationShortcutItemIconType/UIApplicationShortcutItemIconFile, UIApplicationShortcutItemTitle, and UIApplicationShortcutItemType.

  • UIApplicationShortcutItemIconType and UIApplicationShortcutItemIconFile are strings that set the icon for a quick action. For system icons, we use the UIApplicationShortcutItemIconType property, and for custom icons, we use UIApplicationShortcutItemIconFile.
  • UIApplicationShortcutItemTitle is a required string that is displayed to the user.
  • UIApplicationShortcutItemType is a required, app-specific string used to identify the quick action.

Step 2 – Handling the Shortcut

AppDelegate is the place where we handle all the home screen quick actions. We define these variables:

// Set to true when the app is launched or resumed from a quick action
var shortcutHandled: Bool!
// Stores the UIApplicationShortcutItemType of the chosen quick action
var shortcutIdentifier: String?

When a user chooses one of the quick actions, the system launches or resumes the app and calls the performActionFor(shortcutItem:completionHandler:) method in the app delegate:

func application(_ application: UIApplication,
                     performActionFor shortcutItem: UIApplicationShortcutItem,
                     completionHandler: @escaping (Bool) -> Void) {
        shortcutIdentifier = shortcutItem.type
        shortcutHandled = true
        completionHandler(shortcutHandled)
    }

Whenever the application becomes active, applicationDidBecomeActive function is called:

func applicationDidBecomeActive(_ application: UIApplication) {
        // Handle home screen quick actions
        handleHomeActions()
    }

Inside the applicationDidBecomeActive function we call the handleHomeActions() method, which handles the home screen quick actions.

func handleHomeActions() {
    if shortcutHandled == true {
        shortcutHandled = false
        if shortcutIdentifier == ControllerConstants.HomeActions.openSkillAction {
            // Handle action accordingly
        } else if shortcutIdentifier == ControllerConstants.HomeActions.customizeSettingsAction {
            // Handle action accordingly
        } else if shortcutIdentifier == ControllerConstants.HomeActions.setupDeviceAction {
            // Handle action accordingly
        } else if shortcutIdentifier == ControllerConstants.HomeActions.changeVoiceAction {
            // Handle action accordingly
        }
    }
}
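
The identifiers compared above come from the app's constants. A minimal sketch of how such constants could look is shown below; the literal strings are placeholders and must match the UIApplicationShortcutItemType values declared in Info.plist.

// Sketch only: the real values live in ControllerConstants.HomeActions and
// must match the UIApplicationShortcutItemType strings from Info.plist.
struct HomeActions {
    static let openSkillAction = "OpenSkillAction"
    static let customizeSettingsAction = "CustomizeSettingsAction"
    static let setupDeviceAction = "SetupDeviceAction"
    static let changeVoiceAction = "ChangeVoiceAction"
}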

Final Output:

Resources –

  1. Home Screen Quick Actions – Human Interface Guidelines by Apple
  2. Adding 3D Touch Quick Actions by Use Your Loaf
  3. Apple’s documentation on performActionFor:completionHandler

Adding Support for Playing Youtube Videos in SUSI iOS App

SUSI supports many exciting features on the chat screen, from simple answer types to complex map, RSS, and table type responses. A user can even ask SUSI for an image of anything and SUSI responds with the image in the chat screen. What if SUSI could also play YouTube videos? We ask SUSI to play a video and it plays it, isn't that exciting? Yes, SUSI can play YouTube videos too. All the SUSI clients (iOS, Android, and Web) support playing YouTube videos in chat.

Google provides a YouTube iFrame Player API that can be used to play videos inside the app itself instead of passing an intent and playing the videos in the YouTube app. The iFrame API provides support for playing YouTube videos in iOS applications.

In this post, we will see how the feature of playing YouTube videos is implemented in SUSI iOS.

Getting response from server side –

When we ask SUSI to play any video, we get a YouTube video ID in the video_play action type in response. SUSI iOS makes use of the video ID to play the YouTube video. In the response below, you can see that we also get an answer action type, and in the expression of the answer action we get the title of the video.

"actions": [
  {
    "type": "answer",
    "expression": "Playing Kygo - Firestone (Official Video) ft. Conrad Sewell"
  },
  {
    "identifier": "9Sc-ir2UwGU",
    "identifier_type": "youtube",
    "type": "video_play"
  }
]
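
For context, here is a minimal sketch of how a client could pull the video identifier out of such a response. The function and dictionary handling are illustrative only; SUSI iOS parses the response into its own Message model with a videoData property.

// Sketch: extract the YouTube video ID from the "actions" array of a response.
// The key names mirror the JSON above; everything else is illustrative.
func videoIdentifier(from actions: [[String: Any]]) -> String? {
    for action in actions where action["type"] as? String == "video_play" {
        if action["identifier_type"] as? String == "youtube" {
            return action["identifier"] as? String
        }
    }
    return nil
}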

Integrating youtube player in the app –

We have a VideoPlayerView that handles all the iFrame API methods and player events with the help of a YTPlayer HTML file.

When SUSI responds with the video_play action, the first step is to register the YouTubePlayerCell and present the cell in the collectionView of the chat screen.

Registering the Cell –

The register(_:forCellWithReuseIdentifier:) method registers a class for use in creating new collection view cells.

collectionView?.register(YouTubePlayerCell.self, forCellWithReuseIdentifier: ControllerConstants.youtubePlayerCell)

 

Presenting the YouTubePlayerCell –

Here we present the cell in the chat screen using the cellForItemAt method of UICollectionView.

if message.actionType == ActionType.video_play.rawValue {
    if let cell = collectionView.dequeueReusableCell(withReuseIdentifier: ControllerConstants.youtubePlayerCell, for: indexPath) as? YouTubePlayerCell {
        cell.message = message
        cell.delegate = self
        return cell
    }
}

 

Setting size for cell –

We use the sizeForItemAt method of UICollectionView to set the size.

if message.actionType == ActionType.video_play.rawValue {
    return CGSize(width: view.frame.width, height: 158)
}

In YouTubePlayerCell, we display the thumbnail of the YouTube video using a UIImageView. The following method is used to get the thumbnail of a particular video by using the video ID –

  1. Getting thumbnail image from URL
  2. Setting image to imageView
func downloadThumbnail() {
    if let videoID = message?.videoData?.identifier {
        let thumbnailURLString = "https://img.youtube.com/vi/\(videoID)/default.jpg"
        let thumbnailURL = URL(string: thumbnailURLString)
        thumbnailView.kf.setImage(with: thumbnailURL, placeholder: ControllerConstants.Images.placeholder, options: nil, progressBlock: nil, completionHandler: nil)
    }
}

We add a play button in the center of the thumbnail view so that when the user taps it, we can present the player.

On clicking the play button, we present the PlayerViewController, which holds all the player setup, using the overFullScreen modalPresentationStyle.

@objc func playVideo() {
    if let videoID = message?.videoData?.identifier {
        let playerVC = PlayerViewController(videoID: videoID)
        playerVC.modalPresentationStyle = .overFullScreen
        delegate?.loadNewScreen(controller: playerVC)
    }
}

The method above presents the YouTube player with the given video ID. We use the YouTubePlayerDelegate method to autoplay the video.

func playerReady(_ videoPlayer: YouTubePlayerView) {
    videoPlayer.play()
}

The player can be dismissed by tapping on the light black background.
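
Behind the scenes, loadNewScreen(controller:) is a small delegate hook that hands the player controller back to the chat screen for presentation. A minimal sketch of such a delegate is shown below; the protocol and controller names are assumptions and may differ from the actual code.

import UIKit

// Sketch: a presentation delegate the cell can call back into.
protocol PresentControllerDelegate: AnyObject {
    func loadNewScreen(controller: UIViewController)
}

// The chat view controller (name assumed) simply presents the player over itself.
extension ChatViewController: PresentControllerDelegate {
    func loadNewScreen(controller: UIViewController) {
        present(controller, animated: true, completion: nil)
    }
}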

Final Output –

Resources –

  1. Youtube iOS Player API
  2. SUSI API Sample Response for Playing Video
  3. SUSI iOS Link

Implementing Five Star Rating UI in SUSI iOS

A five-star rating system has been introduced in SUSI to rate skills. SUSI enables the user to rate a skill between 1 and 5 stars. The five-star rating system is a simple way to get feedback from the user, and it also helps developers guide further development. Ratings help us better understand individual preferences and present a more personalized user experience. The user feedback helps a product understand whether or not its content is valuable and improve its offerings over time. This can benefit products with and without sophisticated personalization.

Let’s see how the five-star rating system is implemented in SUSI iOS.

  • Average rating displayed near the Try It button – it shows the average rating of a particular skill.
  • Users can submit a rating for any skill between 1 star and 5 stars.
  • Only logged-in users can submit ratings for skills.
  • A rating chart displays the number of ratings for each star (1 to 5); the labels to the right of the chart bars show the number of users who rated a particular star, with the percentage.
  • The average and total ratings for a particular skill are also displayed near the bar chart.
  • Thumbs-up and thumbs-down ratings were removed from the skill detail screen and replaced with 5-star ratings.

Implementation of Rating Chart

For the rating chart, we are using the TEAChart class, which enables us to present rating data on bar charts.

Setting colors for bar chart:

We are using Google’s Material Design colors for the rating bars.

let barChartColors = [
    UIColor.fiveStarRating(),
    UIColor.fourStarRating(),
    UIColor.threeStarRating(),
    UIColor.twoStarRating(),
    UIColor.oneStarRating()
]

Assigning colors to bars:

barChartView.barColors = barChartColors

Assigning data to the bars:

// Sample data
barChartView.data = [5, 1, 1, 1, 2]

Set background color and bar spacing:

barChartView.barSpacing = 3
barChartView.backgroundColor = UIColor.barBackgroundColor()
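
The chart expects the counts in the same order as the colors above, with the five-star bar first. Below is a small sketch of feeding the chart from a Ratings model; the property names are assumptions based on the fields returned by the server.

// Sketch: map a Ratings model onto the chart, five-star bar first
// to match barChartColors above. Property names are assumed.
func updateBarChart(with ratings: Ratings) {
    barChartView.data = [
        ratings.fiveStar,
        ratings.fourStar,
        ratings.threeStar,
        ratings.twoStar,
        ratings.oneStar
    ]
}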

Final Output –

Resources –

  1. Material Design: https://material.io/design/
  2. SUSI iOS Link: https://github.com/fossasia/susi_iOS

Allowing user to submit ratings for skills in SUSI iOS

Rating is a great way to get feedback from the user, and the 5-star rating system is generally used to collect it. The five-star quality rating system was developed as an easy-to-understand rating system for users.

In SUSI iOS we are using a star rating field to get feedback on SUSI skills. We enable the user to rate a skill from one to five stars. The rating submission contains the number of stars the user picked.

The stars help show whether you would recommend the skill to others. The values start at 1 (the lowest) and go up to 5 (the highest), as seen below –

Server-Side Implementation –

The fiveStarRatings API is used to submit the user's rating. Whenever the user taps a star, fiveStarRatings is called:

func submitRating(_ params: [String: AnyObject], _ completion: @escaping(_ updatedRatings: Ratings?, _ success: Bool, _ error: String?) -> Void) {
    let url = getApiUrl(UserDefaults.standard.object(forKey: ControllerConstants.UserDefaultsKeys.ipAddress) as! String, Methods.fiveStarRateSkill)
    _ = makeRequest(url, .get, [:], parameters: params, completion: { (results, message) in
        // handle request
        ....
    })
}

We send the following parameters in the request (a sketch of the full call follows the list):

  • Model
  • Group
  • Language
  • Skill
  • Stars
  • Access Token
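
Putting that together, the call could be made roughly as follows. The parameter keys, the Client singleton, and the skill property names are illustrative; the app defines its own constants for them.

// Sketch: build the rating parameters and submit them.
// Key strings and property names are illustrative, not the app's exact constants.
let params: [String: AnyObject] = [
    "model": skill.model as AnyObject,
    "group": skill.group as AnyObject,
    "language": skill.language as AnyObject,
    "skill": skill.skillName as AnyObject,
    "stars": "\(selectedStars)" as AnyObject,
    "access_token": accessToken as AnyObject
]

Client.sharedInstance.submitRating(params) { updatedRatings, success, _ in
    guard success, let ratings = updatedRatings else { return }
    DispatchQueue.main.async {
        self.updateSkill(with: ratings)
    }
}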

After a successful rating submission, we get updated ratings for each star of the particular skill. The following response comes from the server after the rating is successfully submitted:

{
  "ratings": {
    "one_star": 0,
    "four_star": 1,
    "five_star": 0,
    "total_star": 1,
    "three_star": 0,
    "avg_star": 4,
    "two_star": 0
  },
  "session": {
    "identity": {
      "type": "email",
      "name": "abc@a.com",
      "anonymous": false
    }
  },
  "accepted": true,
  "message": "Skill ratings updated"
}

We use the ratings object from the fiveStarRatings API to update the ratings displayed on the chart and labels. We also use it to update the Skill model that was passed from SkillListingController to SkillDetailController, so the user sees the updated rating chart when navigating back to the skill.

func updateSkill(with ratings: Ratings) {..}

If the user has already submitted a rating for a skill, we use the getRatingByUser API, with the same params as fiveStarRatings except the stars, to fetch the previously submitted rating and display it as the initial rating in the UI.

Implementation of Submit Rating UI –

RatingView is behind the submit rating UI. The FloatRatingViewDelegate protocol is implemented to get notified while the rating is being updated and once it has been updated.

@objc public protocol FloatRatingViewDelegate {
    /// Returns the rating value when touch events end
    @objc optional func floatRatingView(_ ratingView: RatingView, didUpdate rating: Double)

    /// Returns the rating value as the user pans
    @objc optional func floatRatingView(_ ratingView: RatingView, isUpdating rating: Double)
}
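
On the skill detail screen, the callback that fires when the touch ends is the natural place to trigger the submission. A sketch is below; the controller name comes from the earlier section, while the helper it calls is hypothetical.

extension SkillDetailController: FloatRatingViewDelegate {
    // Called when the user lifts the finger; submit the chosen value.
    func floatRatingView(_ ratingView: RatingView, didUpdate rating: Double) {
        submitRating(for: Int(rating)) // hypothetical helper that builds params and calls the API
    }
}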

Rating updated on rating display chart:

Now let's see how we handle the case when a skill has not been rated before and the user rates it for the first time.

A label shows “Skill not rated yet” when a skill has no ratings. When the user rates the skill, that label is hidden and the chart bars and labels are shown.

if self.ratingsBackViewHeightConstraint.constant == 0 {
    self.ratingsBackViewHeightConstraint.constant = 128.0
    self.contentType.topAnchor.constraint(equalTo: self.ratingBackView.bottomAnchor, constant: 16).isActive = true
    self.ratingsBackStackView.isHidden = false
    self.topAvgRatingStackView.isHidden = false
    self.notRatedLabel.isHidden = true
}

 

Resources –

  1. Swift Protocol: https://docs.swift.org/swift-book/LanguageGuide/Protocols.html
  2. SUSI Skills: https://skills.susi.ai/
  3. SUSI Server Link: https://github.com/fossasia/susi_server
  4. SUSI iOS Link: https://github.com/fossasia/susi_iOS

Change Text-to-Speech Voice Language of SUSI in SUSI iOS

The SUSI iOS app now enables the user to change the text-to-speech voice language within the app. The user can select any language of their choice from a list of 37 languages. To change the text-to-speech voice language, go to Settings > Change SUSI’s Voice and choose the language of your choice. Let's see how this feature is implemented.

Apple’s AVFoundation API is used to implement the text-to-speech feature in SUSI iOS. AVFoundation API offers 37 voice languages which can be used for text-to-speech voice accent. AVFoundation’s AVSpeechSynthesisVoice API can be used to select a voice appropriate to the language of the text to be spoken or to select a voice exhibiting a particular local variant of that language (such as Australian or South African English).

To print the list of all languages offered by AVFoundation:

import AVFoundation

print(AVSpeechSynthesisVoice.speechVoices())

Or the complete list of supported languages can be found at Languages Supported by VoiceOver.

When the user clicks Change SUSI’s voice in settings, a screen is presented with the list of available languages with the language code.

A dictionary array holds the list of available languages, with the language name and language code, and is used as the data source for the tableView.

var voiceLanguagesList: [Dictionary<String, String>] = []
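
One way to populate this data source is to iterate over the voices AVFoundation reports. The sketch below assumes the same ChooseLanguage keys used elsewhere in the app; the actual implementation may build the list differently.

// Sketch: build the language list from the voices AVFoundation provides.
func populateVoiceLanguages() {
    voiceLanguagesList = AVSpeechSynthesisVoice.speechVoices().map { voice in
        [
            ControllerConstants.ChooseLanguage.languageCode: voice.language,
            ControllerConstants.ChooseLanguage.languageName: Locale.current.localizedString(forIdentifier: voice.language) ?? voice.language
        ]
    }
}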

When the user chooses a language and taps Done, we store the chosen language in UserDefaults:

UserDefaults.standard.set(voiceLanguagesList[selectedVoiceLanguage][ControllerConstants.ChooseLanguage.languageCode], forKey: ControllerConstants.UserDefaultsKeys.languageCode)
UserDefaults.standard.set(voiceLanguagesList[selectedVoiceLanguage][ControllerConstants.ChooseLanguage.languageName], forKey: ControllerConstants.UserDefaultsKeys.languageName)

The language name and code chosen by the user are displayed in Settings so the user knows which language is currently being used for the text-to-speech voice.

To select a voice for use in speech, we obtain an AVSpeechSynthesisVoice instance using one of the methods in Finding Voices and then set it as the value of the voice property on an AVSpeechUtterance instance containing text to be spoken.

The language code stored earlier in the UserDefaults shared instance is used to set the text-to-speech language of AVSpeechSynthesisVoice:

if let selectedLanguage = UserDefaults.standard.object(forKey: ControllerConstants.UserDefaultsKeys.languageCode) as? String {
    speechUtterance.voice = AVSpeechSynthesisVoice(language: selectedLanguage)
}

AVSpeechUtterance is responsible for a chunk of text to be spoken, along with parameters that affect its speech.
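
Putting the pieces together, speaking a reply with the selected voice looks roughly like the sketch below; the synthesizer property and the helper name are placeholders.

import AVFoundation

let speechSynthesizer = AVSpeechSynthesizer()

// Sketch: speak a chunk of text using the voice language stored in UserDefaults.
func speak(_ text: String) {
    let speechUtterance = AVSpeechUtterance(string: text)
    if let selectedLanguage = UserDefaults.standard.object(forKey: ControllerConstants.UserDefaultsKeys.languageCode) as? String {
        speechUtterance.voice = AVSpeechSynthesisVoice(language: selectedLanguage)
    }
    speechSynthesizer.speak(speechUtterance)
}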

Resources –

  1. UserDefaults: https://developer.apple.com/documentation/foundation/userdefaults
  2. AVSpeechSynthesisVoice: https://developer.apple.com/documentation/avfoundation/avspeechsynthesisvoice
  3. AVFoundation: https://developer.apple.com/av-foundation/
  4. SUSI iOS Link: https://github.com/fossasia/susi_iOS

STOP action implementation in SUSI iOS

You may have noticed that you can stop Google Home or Amazon Alexa during an ongoing task. The same feature is available for SUSI too. Now SUSI can respond to a ‘stop’ action and stop ongoing tasks (e.g. SUSI is narrating a story and if the user says STOP, it stops narrating the story). The ‘stop’ action was introduced to enable the user to make SUSI stop anything it’s doing.

Video demonstration of how stop action work on SUSI iOS App can be found here.

Stop action is implemented on SUSI iOS, Web chat, and Android. Here we will see how it is implemented in SUSI iOS.

When you ask SUSI to stop, you get the following actions object from the server side:

"actions": [{"type": "stop"}]

Full JSON response can be found here.

When SUSI responds with the ‘stop’ action, we create a new action type ‘stop’ and set the `Message` object's `actionType` to ‘stop’.

Adding ‘stop’ to action type:

enum ActionType: String {
    ... // other action types
    case stop
}

Assigning to the message object:

if type == ActionType.stop.rawValue {
    message.actionType = ActionType.stop.rawValue
    message.message = ControllerConstants.stopMessage
    message.answerData = AnswerAction(action: action)
}

A new collectionView cell is created to respond to the user with a “stopped” message.

Registering the stopCell:

collectionView?.register(StopCell.self, forCellWithReuseIdentifier: ControllerConstants.stopCell)

Add cell to the chat screen:

if message.actionType == ActionType.stop.rawValue {
    if let cell = collectionView.dequeueReusableCell(withReuseIdentifier: ControllerConstants.stopCell, for: indexPath) as? StopCell {
        cell.message = message
        let message = ControllerConstants.stopMessage
        let estimatedFrame = self.estimatedFrame(message: message)
        cell.setupCell(estimatedFrame, view.frame)
        return cell
    }
}

AVFoundation’s AVSpeechSynthesizer API is used to stop the action:

func stopSpeakAction() {
    speechSynthesizer.stopSpeaking(at: AVSpeechBoundary.immediate)
}

This method immediately stops the speak action.
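
A hypothetical call site for this is where the chat controller processes an incoming message: when the action type is ‘stop’, any ongoing speech is interrupted.

// Sketch: interrupt ongoing speech when a 'stop' action arrives.
if message.actionType == ActionType.stop.rawValue {
    stopSpeakAction()
}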

Final Output:

Resources – 

  1. About SUSI: https://chat.susi.ai/overview
  2. JSON response for ‘stop’ action: https://api.susi.ai/susi/chat.json?timezoneOffset=-330&q=susi+stop
  3. AVSpeechSynthesisVoice: https://developer.apple.com/documentation/avfoundation/avspeechsynthesisvoice
  4. AVFoundation: https://developer.apple.com/av-foundation/
  5. SUSI iOS Link: https://github.com/fossasia/susi_iOS
  6. SUSI Android Link: https://github.com/fossasia/susi_android
  7. SUSI Web Chat Link: https://chat.susi.ai/

Creating Onboarding Screens for SUSI iOS

Onboarding screens are designed to introduce users to how the application works and what main functions it has, to help them understand how to use it. It can also be helpful for developers who intend to extend the current project.

When you enter the SUSI iOS app for the first time, you see onboarding screens displaying information about SUSI iOS features. SUSI iOS uses Material design, so the UI of the onboarding screens follows Material design as well.

There are four onboarding screens:

  1. Login (Showing the login features of SUSI iOS) – Log in to the app using a SUSI.AI account, sign up to create a new account, or just skip login.
  2. Chat Interface (Showing the chat screen of SUSI iOS) – Interact with SUSI.AI by asking queries. Use the microphone button for voice interaction.
  3. SUSI Skill (Showing SUSI Skills features) – Browse and try your favorite SUSI.AI skills.
  4. Chat Settings (SUSI iOS Chat Settings) – Personalize your chat settings for a better experience.

Onboarding Screens User Interface

 

There are three important components of every onboarding screen:

  1. Title – Title of the screen (Login, Chat Interface etc).
  2. Image – Showing the visual presentation of SUSI iOS features.
  3. Description – Small descriptions of features.

Onboarding screen user controls:

  • Pagination – gives the user the ability to move to the next and previous onboarding screens.
  • Swiping – left and right swipes are implemented to let the user move to the next and previous onboarding screens.
  • Skip Button – enables users to skip the onboarding instructions and go directly to the login screen.

Implementation of Onboarding Screens:

  • Initializing PaperOnboarding:
override func viewDidLoad() {
    super.viewDidLoad()

    UIApplication.shared.statusBarStyle = .lightContent
    view.accessibilityIdentifier = "onboardingView"

    setupPaperOnboardingView()
    skipButton.isHidden = false
    bottomLoginSkipButton.isHidden = true
    view.bringSubview(toFront: skipButton)
    view.bringSubview(toFront: bottomLoginSkipButton)
}

private func setupPaperOnboardingView() {
    let onboarding = PaperOnboarding()
    onboarding.delegate = self
    onboarding.dataSource = self
    onboarding.translatesAutoresizingMaskIntoConstraints = false
    view.addSubview(onboarding)

    // Add constraints
    for attribute: NSLayoutAttribute in [.left, .right, .top, .bottom] {
        let constraint = NSLayoutConstraint(item: onboarding,
                                            attribute: attribute,
                                            relatedBy: .equal,
                                            toItem: view,
                                            attribute: attribute,
                                            multiplier: 1,
                                            constant: 0)
        view.addConstraint(constraint)
    }
}

 

  • Adding content using dataSource methods:

    let items = [
        OnboardingItemInfo(informationImage: Asset.login.image,
                           title: ControllerConstants.Onboarding.login,
                           description: ControllerConstants.Onboarding.loginDescription,
                           pageIcon: Asset.pageIcon.image,
                           color: UIColor.skillOnboardingColor(),
                           titleColor: UIColor.white, descriptionColor: UIColor.white, titleFont: titleFont, descriptionFont: descriptionFont),

        OnboardingItemInfo(informationImage: Asset.chat.image,
                           title: ControllerConstants.Onboarding.chatInterface,
                           description: ControllerConstants.Onboarding.chatInterfaceDescription,
                           pageIcon: Asset.pageIcon.image,
                           color: UIColor.chatOnboardingColor(),
                           titleColor: UIColor.white, descriptionColor: UIColor.white, titleFont: titleFont, descriptionFont: descriptionFont),

        OnboardingItemInfo(informationImage: Asset.skill.image,
                           title: ControllerConstants.Onboarding.skillListing,
                           description: ControllerConstants.Onboarding.skillListingDescription,
                           pageIcon: Asset.pageIcon.image,
                           color: UIColor.loginOnboardingColor(),
                           titleColor: UIColor.white, descriptionColor: UIColor.white, titleFont: titleFont, descriptionFont: descriptionFont),

        OnboardingItemInfo(informationImage: Asset.skillSettings.image,
                           title: ControllerConstants.Onboarding.chatSettings,
                           description: ControllerConstants.Onboarding.chatSettingsDescription,
                           pageIcon: Asset.pageIcon.image,
                           color: UIColor.iOSBlue(),
                           titleColor: UIColor.white, descriptionColor: UIColor.white, titleFont: titleFont, descriptionFont: descriptionFont)
    ]

    extension OnboardingViewController: PaperOnboardingDelegate, PaperOnboardingDataSource {
        func onboardingItemsCount() -> Int {
            return items.count
        }

        func onboardingItem(at index: Int) -> OnboardingItemInfo {
            return items[index]
        }
    }
    

     

  • Hiding/Showing Skip Buttons:
    func onboardingWillTransitonToIndex(_ index: Int) {
        skipButton.isHidden = index == 3 ? true : false
        bottomLoginSkipButton.isHidden = index == 3 ? false : true
    }
    

Resources:


Using CoreLocation in SUSI iOS

The SUSI Server responds with intelligent answers to the user’s queries. To make these answers better, the server makes use of the user’s location which is sent as a parameter to the query request each time. To implement this feature in the SUSI iOS client, we use the CoreLocation framework provided by Apple which helps us to get the user’s location coordinates and add them as a parameter to each request made.

In order to start with using the CoreLocation framework, we first import it inside the view controller.

import CoreLocation

Now, we create a variable of type CLLocationManager which will help us to use the actual functionality.

// Location Manager
var locationManager = CLLocationManager()

The location manager has delegate methods which give us the option to get the best possible accuracy for the user’s location. To use them, the controller needs to conform to CLLocationManagerDelegate, so we create an extension of the view controller conforming to it.

extension MainViewController: CLLocationManagerDelegate {

   // use functionality

}

Next, we set the manager delegate.

locationManager.delegate = self

And create a method to ask for using the user’s location and set the delegate properties.

func configureLocationManager() {
       locationManager.delegate = self
       if CLLocationManager.authorizationStatus() == .notDetermined || CLLocationManager.authorizationStatus() == .denied {
           self.locationManager.requestWhenInUseAuthorization()
       }

       locationManager.distanceFilter = kCLDistanceFilterNone
       locationManager.desiredAccuracy = kCLLocationAccuracyBest
}

Here, we ask for the user's location if authorization was previously denied or is not yet determined. Following that, we set the `distanceFilter` to kCLDistanceFilterNone and `desiredAccuracy` to kCLLocationAccuracyBest. Finally, we are left with starting to update the location, which we do by:

locationManager.startUpdatingLocation()

We call this method inside viewDidLoad to start updating the location when the view first loads. The complete extension looks like below:

extension MainViewController: CLLocationManagerDelegate {

   // Configures Location Manager
   func configureLocationManager() {
       locationManager.delegate = self
       if CLLocationManager.authorizationStatus() == .notDetermined || CLLocationManager.authorizationStatus() == .denied {
           self.locationManager.requestWhenInUseAuthorization()
       }

       locationManager.distanceFilter = kCLDistanceFilterNone
       locationManager.desiredAccuracy = kCLLocationAccuracyBest
       locationManager.startUpdatingLocation()
   }

}
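
As mentioned above, the configuration is triggered when the view first loads; a minimal sketch:

override func viewDidLoad() {
    super.viewDidLoad()
    // Ask for permission (if needed) and start receiving location updates.
    configureLocationManager()
}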

Now, it’s very easy to use the location manager and get the coordinates and add it to the params for each request.

if let location = locationManager.location {
   params[Client.ChatKeys.Latitude] = location.coordinate.latitude as AnyObject
   params[Client.ChatKeys.Longitude] = location.coordinate.longitude as AnyObject
}

Now the params dictionary is added to each request made, so that the user gets the most accurate results for each query they make.

References:


Implementing Speech To Text in SUSI iOS

SUSI, being an intelligent bot, lets the user provide input hands-free by talking, without even needing to lift the phone to type. The speech-to-text feature is available in SUSI iOS with the help of the Speech framework, released alongside iOS 10, which enables continuous speech detection and transcription. The detection is really fast and supports around 50 languages and dialects, from Arabic to Vietnamese. The speech recognition API does its heavy detection work on Apple’s servers, which requires an internet connection. The API is not always available on every device, and it provides the ability to check whether a specific language is supported at a particular time.

How to use the Speech to Text feature?

1)  Go to the view controller and import the Speech framework.

2)  Because the speech is transmitted over the internet and uses Apple’s servers for computation, we need to ask the user for permission to use the microphone and speech recognition. Add the following two keys to the Info.plist file, which display alerts asking the user for permission to use speech recognition and to access the microphone. Add a specific sentence for each key string, which will be displayed to the user in the alerts.

    • NSSpeechRecognitionUsageDescription
    • NSMicrophoneUsageDescription

The prompts appear automatically when the functionality is used in the app. Since we already have hot word recognition enabled, the microphone alert shows up automatically after login, and the speech recognition alert shows up after the microphone button is tapped.

3)  To request user authorization for speech recognition, we use the SFSpeechRecognizer.requestAuthorization method.

func configureSpeechRecognizer() {
        speechRecognizer?.delegate = self

        SFSpeechRecognizer.requestAuthorization { (authStatus) in
            var isEnabled = false

            switch authStatus {
            case .authorized:
                print("Authorized speech")
                isEnabled = true
            case .denied:
                print("Denied speech")
                isEnabled = false
            case .restricted:
                print("speech restricted")
                isEnabled = false
            case .notDetermined:
                print("not determined")
                isEnabled = false
            }

            OperationQueue.main.addOperation {

                // handle button enable/disable

                self.sendButton.tag = isEnabled ? 0 : 1

                self.addTargetSendButton()
            }
        }
    }

4)  Now, we create instances of AVAudioEngine, SFSpeechRecognizer, SFSpeechAudioBufferRecognitionRequest, and SFSpeechRecognitionTask.

let speechRecognizer = SFSpeechRecognizer(locale: Locale.init(identifier: "en-US"))
var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
var recognitionTask: SFSpeechRecognitionTask?
let audioEngine = AVAudioEngine()

5)  Create a method called `readAndRecognizeSpeech`. Here, we do all the recognition-related work. We first check whether a recognitionTask is already running and, if so, cancel it.

if recognitionTask != nil {
  recognitionTask?.cancel()
  recognitionTask = nil
}

6)  Now, get the shared AVAudioSession instance to prepare for audio recording, set the session's category to record and its mode, and activate it. Since these calls might throw, they are wrapped in a do-catch block.

let audioSession = AVAudioSession.sharedInstance()

do {

    try audioSession.setCategory(AVAudioSessionCategoryRecord)

    try audioSession.setMode(AVAudioSessionModeMeasurement)

    try audioSession.setActive(true, with: .notifyOthersOnDeactivation)

} catch {

    print("audioSession properties weren't set because of an error.")

}

7)  Instantiate the recognitionRequest.

recognitionRequest = SFSpeechAudioBufferRecognitionRequest()

8) Check if the device has an audio input else throw an error.

guard let inputNode = audioEngine.inputNode else {

fatalError("Audio engine has no input node")

}

9)  Enable recognitionRequest to report partial results and start the recognitionTask.

recognitionRequest.shouldReportPartialResults = true

recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in

  var isFinal = false // to indicate if final result

  if result != nil {

    self.inputTextView.text = result?.bestTranscription.formattedString

    isFinal = (result?.isFinal)!

  }

  if error != nil || isFinal {

    self.audioEngine.stop()

    inputNode.removeTap(onBus: 0)

    self.recognitionRequest = nil

    self.recognitionTask = nil

  }
})

10) Next, we start with writing the method that performs the actual speech recognition. This will record and process the speech continuously.

  • First, we get the audio engine's input node for the incoming audio using .inputNode.
  • .installTap configures the node and sets up the buffer size and the format.
let recordingFormat = inputNode.outputFormat(forBus: 0)

inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, _) in

    self.recognitionRequest?.append(buffer)

}

11)  Next, we prepare and start the audio engine.

audioEngine.prepare()

do {

  try audioEngine.start()

} catch {

  print("audioEngine couldn't start because of an error.")

}

12)  Create a method that stops the Speech recognition.

func stopSTT() {

    print("audioEngine stopped")

    audioEngine.inputNode?.removeTap(onBus: 0)

    audioEngine.stop()

    recognitionRequest?.endAudio()

    indicatorView.removeFromSuperview()



    if inputTextView.text.isEmpty {

        self.sendButton.setImage(UIImage(named: ControllerConstants.mic), for: .normal)

    } else {

        self.sendButton.setImage(UIImage(named: ControllerConstants.send), for: .normal)

    }

        self.inputTextView.isUserInteractionEnabled = true
}

13)  Update the view while speech recognition is running to indicate its status to the user. Add the code below just after the audio engine preparation.

// Listening indicator swift

self.indicatorView.frame = self.sendButton.frame

self.indicatorView.isUserInteractionEnabled = true

let gesture: UITapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(startSTT))

gesture.numberOfTapsRequired = 1

self.indicatorView.addGestureRecognizer(gesture)
self.sendButton.setImage(UIImage(), for: .normal)

indicatorView.startAnimating()

self.sendButton.addSubview(indicatorView)

self.sendButton.addConstraintsWithFormat(format: "V:|[v0(24)]|", views: indicatorView)

self.sendButton.addConstraintsWithFormat(format: "H:|[v0(24)]|", views: indicatorView)

self.inputTextView.isUserInteractionEnabled = false
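
The tap gesture above targets a startSTT selector. A minimal sketch of that entry point is below; it is assumed to toggle recognition, stopping it if the audio engine is already running and starting it otherwise.

// Sketch: selector for the tap on the listening indicator (behavior assumed).
@objc func startSTT() {
    if audioEngine.isRunning {
        stopSTT()
    } else {
        readAndRecognizeSpeech() // the recognition method from step 5
    }
}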

The screenshot of the implementation is below:

       

References
