Welcome the Visdom Project at FOSSASIA Now Fully Open Source

We are proud to announce that FOSSASIA is welcoming the Visdom project. The project is being transitioned from Facebook AI Research to the FOSSASIA Organization. As part of this transition, it has been relicensed under the Apache License 2.0 and is now fully Open Source.

Visdom is a flexible tool for creating, organizing, and sharing visualizations of live, rich data. It aims to facilitate visualization of (remote) data with an emphasis on supporting scientific experimentation. It supports PyTorch and NumPy.

Visdom was created in 2017 by Allan Jabri and Laurens van der Maaten of Facebook AI Research, and further developed under the leadership of Jack Urbanek. To date, 90 developers from around the world have contributed to the project with over 3000 projects depending on Visdom. It is now available on the FOSSASIA GitHub.

“I’m excited to see how Visdom continues to grow as a FOSSASIA project, as the community will set a new vision for what we all want out of it. While I’ll no longer be leading the project, I will remain engaged to provide clear context for transitions, code reviews, and direct code contributions.”

Jack Urbanek, Facebook Research Engineer and Visdom project lead

“My goal continues to be building amazing communities around state of the art AI rooted in open source collaboration. Bringing the Visdom project to FOSSASIA is a great example of this and I am extremely pleased to see the project continue this path with FOSSASIA as the new host of Visdom.”

Joe Spisak, Product Manager for Facebook’s open-source AI platform and PyTorch

FOSSASIA has been developing Open Source software applications and Open Hardware together with a global community from its base in Asia since 2009. FOSSASIA’s goal is to provide access to open technologies, science applications and knowledge that improve people’s lives, stating in its mission: “We want to enable people to adapt and change technology according to their own ideas and needs and validate science and knowledge through an Open Access approach.”

This mission perfectly aligns with the goals of Visdom as an Open Source tool that aims to:

  • Facilitate visualization of data with an emphasis on supporting scientific experimentation and 
  • Organize a visualization space programmatically or through the UI to create dashboards for live data, inspect results of experiments, or debug experimental code.

Hong Phuc Dang, OSI vice president and FOSSASIA founder says:

“We will continue the development of Visdom in cooperation with the developer and user community. We already discussed lots of ideas to move forward on an exciting roadmap with the core team and adding it to FOSSASIA’s Pocket Science Lab applications. We are looking forward to the input and involvement of the community to bring the project to the next level.”

Mario Behling, co-founder of FOSSASIA and CEO of OpnTec adds:

“We are thrilled that the Visdom project becomes fully Open Source as part of the project transition. It is fantastic to see how Facebook supports open technologies and takes an active role to foster International cooperation and development in the FOSS ecosystem by making this transition. I would like to thank Jack Urbanek who worked so hard on the project for years as well as the project creators Allan Jabri and Laurens van der Maaten, Joe Spisak whose role was essential in making this transition happen and the entire Facebook AI team.”

What are the plans for Visdom at FOSSASIA?

The short term plans are to establish all project channels to enable the developer community to actively participate in the project. On the technical side of things we are in the process of making the continuous integration work on the new deployment and adding tests and update checks. Visdom is also joining the Codeheat Coding Contest as a participating project. The contest runs until 30th June 2021.

Where is the list of issues?

The issue tracker is available on the code repository here: https://github.com/fossasia/visdom/ 

How can developers and users communicate?

Apart from adding issues in the issue tracker, we invite you to join us on a dedicated chat channel here: https://gitter.im/fossasia/visdom. Login with GitHub, GitLab or Twitter is required.

Where else is information about the next steps and the roadmap?

For technical discussions the issue tracker is the best place. To stay up to date about general project developments please follow us on:


Connecting SUSI iOS App to SUSI Smart Speaker

SUSI Smart Speaker is an Open Source speaker with many exciting features. The user needs an Android or iOS device to set up the speaker. You can refer to this post for the initial connection to the SUSI Smart Speaker. In this post, we will see how a user can connect the SUSI Smart Speaker to iOS devices (iPhone/iPad).

Implementation –

The first step is to detect whether an iOS device is connected to the SUSI.AI hotspot or not. For this, we match the currently connected Wi-Fi SSID with the SUSI.AI hotspot SSID. If it matches, we show the connected device in Device Activity to proceed further with the setup.

Choosing Room –

The room name is basically the location of your SUSI Smart Speaker in the home. You may have multiple SUSI Smart Speakers in different rooms, so the purpose of adding the room is to differentiate between them.

When the user taps the displayed Wi-Fi cell, the initial setup starts. We use the didSelectRowAt method of UITableViewDelegate to determine which cell was selected. On tapping the displayed Wi-Fi cell, a popup opens with a Room Location text field.

override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
    if indexPath.row == 0, let speakerSSID = fetchSSIDInfo(), speakerSSID == ControllerConstants.DeviceActivity.susiSSID {
        // Open a popup to select Rooms
        presentRoomsPopup()
    }
}

When the user clicks the Next button, we send the room location to the speaker’s local server using the following API endpoint, with the room name as a parameter:

http://10.0.0.1:5000/speaker_config/

Refer to this post for more detail about how choosing a room works and how it is implemented in SUSI iOS.

Sharing Wi-Fi Credentials –

On successfully choosing the room, we present a popup that asks the user to enter the credentials of the previously connected Wi-Fi, so that the Smart Speaker can connect to a Wi-Fi network that provides an internet connection for playing music and handling commands on the speaker.

We present a popup with a text field for entering the Wi-Fi password.

When the user clicks the Next button, we share the Wi-Fi credentials with the speaker via the following API endpoint:

http://10.0.0.1:5000/wifi_credentials/

With the following params-

  1. Wifissid – Connected Wi-Fi SSID
  2. Wifipassd – Connected Wi-Fi password

In this API call, we share the Wi-Fi SSID and Wi-Fi password with the Smart Speaker. If the credentials are successfully accepted by the speaker, we present a popup for the user’s SUSI account password; otherwise, we present the Enter Wi-Fi Credentials popup again.

Client.sharedInstance.sendWifiCredentials(params) { (success, message) in
    DispatchQueue.main.async {
        self.alertController.dismiss(animated: true, completion: nil)
        if success {
            self.presentUserPasswordPopup()
        } else {
            self.view.makeToast("", point: self.view.center, title: message, image: nil, completion: { didTap in
                UIApplication.shared.endIgnoringInteractionEvents()
                self.presentWifiCredentialsPopup()
            })
        }
    }
}

 

Sharing SUSI Account Credentials –

In the method above we have seen that when the SUSI Smart Speaker accepts the Wi-Fi credentials, we proceed further with the SUSI account credentials. We open a popup asking for the user’s SUSI account password:

When the user clicks the Next button, we use the following API endpoint to share the user’s SUSI account credentials with the SUSI Smart Speaker:

http://10.0.0.1:5000/auth/

With the following params-

  1. email
  2. password

The user’s email is already saved on the device, so the user doesn’t have to type it again. If the credentials are successfully accepted by the speaker, we proceed with the configuration process; otherwise, we open the Enter Password popup again.

Client.sharedInstance.sendAuthCredentials(params) { (success, message) in
    DispatchQueue.main.async {
        self.alertController.dismiss(animated: true, completion: nil)
        if success {
            self.setConfiguration()
        } else {
            self.view.makeToast("", point: self.view.center, title: message, image: nil, completion: { didTap in
                UIApplication.shared.endIgnoringInteractionEvents()
                self.presentUserPasswordPopup()
            })
        }
    }
}

 

Setting Configuration –

After the SUSI account credentials are successfully shared, the following API endpoint is used to set the configuration.

http://10.0.0.1:5000/config/

With the following params-

  1. stt
  2. tts
  3. hotword
  4. wake

A successful response to this API call completes the connection between the user’s iOS device and the SUSI Smart Speaker.

Client.sharedInstance.setConfiguration(params) { (success, message) in
    DispatchQueue.main.async {
        if success {
            // Successfully Configured
            self.isSetupDone = true
            self.view.makeToast(ControllerConstants.DeviceActivity.doneSetupDetailText)
        } else {
            self.view.makeToast("", point: self.view.center, title: message, image: nil, completion: { didTap in
                UIApplication.shared.endIgnoringInteractionEvents()
            })
        }
    }
}

After successful connection-

 

Resources –

  1. Apple’s Documentation of tableView(_:didSelectRowAt:) API
  2. Initial Setups for Connecting SUSI Smart Speaker with iPhone/iPad
  3. SUSI Linux Link: https://github.com/fossasia/susi_linux
  4. Adding Option to Choose Room for SUSI Smart Speaker in iOS App

Adding Support for Playing Youtube Videos in SUSI iOS App

SUSI supports many exciting features in the chat screen, from simple answer types to complex map, RSS and table type responses. The user can even ask SUSI for an image of anything, and SUSI responds with the image in the chat screen. What if SUSI could also play YouTube videos when asked? Wouldn’t that be exciting? Yes, SUSI can play YouTube videos too. All the SUSI clients (iOS, Android, and Web) support playing YouTube videos in chat.

Google provides a YouTube iFrame Player API that can be used to play videos inside the app itself instead of passing an intent and playing the videos in the YouTube app. The iFrame API provides support for playing YouTube videos in iOS applications.

In this post, we will see how the YouTube video playback feature is implemented in SUSI iOS.

Getting response from server side –

When we ask SUSI to play any video, we get a YouTube Video ID in the video_play action type in the response. SUSI iOS makes use of the Video ID to play the YouTube video. In the response below, you can see that we also get an answer action type, and the expression of the answer action contains the title of the video.

"actions": [
  {
    "type": "answer",
    "expression": "Playing Kygo - Firestone (Official Video) ft. Conrad Sewell"
  },
  {
    "identifier": "9Sc-ir2UwGU",
    "identifier_type": "youtube",
    "type": "video_play"
  }
]

Integrating youtube player in the app –

We have a VideoPlayerView that handles all the iFrame API methods and player events with the help of a YTPlayer HTML file.

When SUSI responds with a video_play action, the first step is to register the YouTubePlayerCell and present the cell in the collectionView of the chat screen.

Registering the Cell –

The register(_:forCellWithReuseIdentifier:) method registers a class for use in creating new collection view cells.

collectionView?.register(YouTubePlayerCell.self, forCellWithReuseIdentifier: ControllerConstants.youtubePlayerCell)

 

Presenting the YouTubePlayerCell –

Here we present the cell in the chat screen using the cellForItemAt method of UICollectionView.

if message.actionType == ActionType.video_play.rawValue {
    if let cell = collectionView.dequeueReusableCell(withReuseIdentifier: ControllerConstants.youtubePlayerCell, for: indexPath) as? YouTubePlayerCell {
        cell.message = message
        cell.delegate = self
        return cell
    }
}

 

Setting size for cell –

We use the sizeForItemAt method of UICollectionView to set the size.

if message.actionType == ActionType.video_play.rawValue {
    return CGSize(width: view.frame.width, height: 158)
}

In YouTubePlayerCell, we display the thumbnail of the YouTube video using a UIImageView. The following method is used to get the thumbnail of a particular video using the Video ID –

  1. Getting thumbnail image from URL
  2. Setting image to imageView

func downloadThumbnail() {
    if let videoID = message?.videoData?.identifier {
        let thumbnailURLString = "https://img.youtube.com/vi/\(videoID)/default.jpg"
        let thumbnailURL = URL(string: thumbnailURLString)
        thumbnailView.kf.setImage(with: thumbnailURL, placeholder: ControllerConstants.Images.placeholder, options: nil, progressBlock: nil, completionHandler: nil)
    }
}

We add a play button in the center of the thumbnail view so that when the user taps it, we can present the player.

On clicking the Play button, we present the PlayerViewController, which holds all the player setup, using the overFullScreen modalPresentationStyle.

@objc func playVideo() {
    if let videoID = message?.videoData?.identifier {
        let playerVC = PlayerViewController(videoID: videoID)
        playerVC.modalPresentationStyle = .overFullScreen
        delegate?.loadNewScreen(controller: playerVC)
    }
}

The method above presents the YouTube player with the given Video ID. We use a YouTubePlayerDelegate method to autoplay the video.

func playerReady(_ videoPlayer: YouTubePlayerView) {
    videoPlayer.play()
}

The player can be dismissed by tapping on the light black background.

Final Output –

Resources –

  1. Youtube iOS Player API
  2. SUSI API Sample Response for Playing Video
  3. SUSI iOS Link

Initial Setups for Connecting SUSI Smart Speaker with iPhone/iPad

You may have experienced the Apple HomePod, Google Home, Amazon Alexa etc., or read about smart speakers that offer interactive actions over voice commands. A smart speaker uses a hot word for activation and utilizes Wi-Fi, Bluetooth, and other wireless protocols.

SUSI.AI is also coming out with an Open Source smart speaker that can perform various actions, like playing music, over voice commands. To use the SUSI Smart Speaker, you have to connect it to the SUSI iOS or Android App. You can manage your connected devices in the SUSI iOS, Android and Web clients. Here we will see the initial setup for connecting the SUSI Smart Speaker with an iPhone/iPad (iOS devices).

You may be aware that iOS does not allow connecting to Wi-Fi from within an app. To connect to a particular Wi-Fi network, you have to go to the phone settings and connect to it from there. The SUSI Smart Speaker creates a temporary hotspot for the initial setup. Follow the instructions below to connect to the SUSI Smart Speaker hotspot –

  1. Tap the Home button and go to your iPhone Settings > Wi-Fi
  2. Connect to the Wi-Fi hotspot for the device that you are setting up. It will have the name “susi.ai”, like in the image below
  3. Come back to the SUSI app to proceed with the setup.

These instructions are also available within the app when you are not connected to the SUSI Smart Speaker hotspot and tap `Setup a Device` or the plus icon in the Device Activity screen navigation bar.

Devices Activity and getting current Wi-Fi SSID:

The Devices section in the Settings screen shows the currently connected device. In the Device Activity screen, the user can manage the connected device. Only a logged-in user can access Device Activity. When the user clicks on Device Accessories in settings and is not logged in, an alert is prompted with a Login option. By clicking the Login option, the user is directed to the Login screen, where they can log in and come back to the device section to proceed further.

If the user is already logged in, the Device Activity screen is presented. We use the following method to check whether the iPhone/iPad is connected to the SUSI Smart Speaker:

func fetchSSIDInfo() -> String? {
    var ssid: String?
    if let interfaces = CNCopySupportedInterfaces() as? [String] {
        for interface in interfaces {
            if let interfaceInfo = CNCopyCurrentNetworkInfo(interface as CFString) as NSDictionary? {
                ssid = interfaceInfo[kCNNetworkInfoKeySSID as String] as? String
                break
            }
        }
    }
    return ssid
}

Apple’s SystemConfiguration API is used to get the current Wi-Fi SSID. SystemConfiguration allows applications to access a device’s network configuration settings and determine the reachability of the device, such as whether Wi-Fi or cellular connectivity is active.

import SystemConfiguration.CaptiveNetwork

The method above returns the SSID of the device’s current Wi-Fi network. SSID is simply the technical term for a network name. When you set up a wireless home network, you give it a name to distinguish it from other networks in your neighborhood. You’ll see this name when you connect your device to your wireless network.

If the current Wi-Fi matches the SUSI Smart Speaker hotspot, we display the device in the TableView; if not, we display “No device connected yet”.

if let speakerSSID = fetchSSIDInfo(), speakerSSID == "susi.ai" {
    cell.accessoryType = .disclosureIndicator
    cell.textLabel?.text = speakerSSID
} else {
    cell.accessoryType = .none
    cell.textLabel?.text = "No device connected yet"
}

SUSI Smart Speaker is coming with very exciting features. Stay tuned.

Resources –

  1. SUSI iOS Link: https://github.com/fossasia/susi_iOS
  2. Apple’s SystemConfiguration Framework Documentation
  3. Bell’s article on What Do SSID and WPA2 mean

Skill Development using SUSI Skill CMS

There are a lot of personal assistants around like Google Assistant, Apple’s Siri, Windows’ Cortana, Amazon’s Alexa, etc. What is then special about SUSI.AI which makes it stand apart from all the different assistants in the world? SUSI is different as it gives users the ability to create their own skills in a Wiki-like system. You don’t need to be a developer to be able to enhance SUSI. And, SUSI is an Open Source personal assistant which can do a lot of incredible stuff for you, made by you.

So, let’s say you want to create your own Skill and add it to the existing SUSI Skills. These are the steps you need to follow –

  1. The current SUSI Skill Development Environment is based on an Etherpad. An Etherpad is a web-based collaborative real-time editor. https://dream.susi.ai/ is one such Etherpad. Open https://dream.susi.ai/ and name your dream (in lowercase letters).
  2. Define your skill in the Etherpad. The general skill format is

::name <Skill_name>
::author <author_name>
::author_url <author_url>
::description <description> 
::dynamic_content <Yes/No>
::developer_privacy_policy <link>
::image <image_url>
::term_of_use <link>

#Intent
User query1|query2|query3....
Answer answer1|answer2|answer3...

 

Patterns in query can be learned easily via this tutorial.

  3. Open any SUSI Client and then write dream <your dream name> so that dreaming is enabled for SUSI. Once dreaming is enabled, you can now test any skills which you’ve made in your Etherpad.
  4. Once you’ve tested your skill, write ‘stop dreaming’ to disable dreaming for SUSI.
  5. If the testing was successful and you want your skill to be added to SUSI Skills, send a Pull Request to the susi_skill_data repository providing your dream name.

How do you modify an existing skill?

SUSI Skill CMS is a web interface where you can modify the skills you’ve made. All the skills of SUSI are directly in sync with the susi_skill_data.

To edit any skill, you need to follow these steps –

  1. Login to SUSI Skill CMS website using your email and password (or Sign Up to the website if you haven’t already).
  2. Click on the skill which you want to edit and then click on the “edit” icon.
  3. You can edit all aspects of the skill in the next state. Below is a preview:

Make the changes and then click on the “SAVE” button to save the skill.

What’s happening Behind The Scenes in the EDIT process?

  • SkillEditor.js is the file which is responsible for keeping a check over various validations in the Skill Editing process. There are certain validations that need to be made in the process. Those are as follows –
  • Check whether User has logged in or not

if (!cookies.get('loggedIn')) {
            notification.open({
                message: 'Not logged In',
                description: 'Please login and then try to create/edit a skill',
                icon: <Icon type='close-circle' style={{ color: '#f44336' }} />,
            });
            this.setState({
                loading: false
            });
            return 0;
        }

 

  • Check whether Commit Message has been entered by User or not

if (this.state.commitMessage === null) {
            notification.open({
                message: 'Please add a commit message',
                icon: <Icon type='close-circle' style={{ color: '#f44336' }} />,
            });

            this.setState({
                loading: false
            });
            return 0;
        }

 

  • Check to ensure that request is sent only if there are some differences in old values and new values

if (this.state.oldGroupValue === this.state.groupValue &&
          this.state.oldExpertValue === this.state.expertValue &&
          this.state.oldLanguageValue === this.state.languageValue &&
          !this.state.codeChanged && !this.state.image_name_changed) {
            notification.open({
                message: 'Please make some changes to save the Skill',
                icon: <Icon type='close-circle' style={{ color: '#f44336' }} />,
            });
            self.setState({
                loading: false
            });
            return 0;
        }

 

  • After doing the above validations, a request is sent to the Server and the User is shown a notification accordingly, whether the Skill has been uploaded to the Server or there has been some error.

$.ajax(settings)
            .done(function (response) {
                this.setState({
                    loading: false
                });
                let data = JSON.parse(response);
                if (data.accepted === true) {
                    notification.open({
                        message: 'Accepted',
                        description: 'Your Skill has been uploaded to the server',
                        //success/>
                    });
                }
                else {
                    this.setState({
                        loading: false
                    });
                    notification.open({
                        message: 'Error Processing your Request',
                        description: String(data.message),
                        //failure />
                    });
                }
            }

 

  • If the User is notified with a Success notification, then to verify whether the Skill has been added, the User can go to the susi_skill_data repo and check whether there is a recent commit regarding the same.

Resources


Creating Onboarding Screens for SUSI iOS

Onboarding screens are designed to introduce users to how the application works and what main functions it has, to help them understand how to use it. It can also be helpful for developers who intend to extend the current project.

When you enter the SUSI iOS app for the first time, you see the onboarding screens displaying information about SUSI iOS features. SUSI iOS uses Material Design, so the UI of the onboarding screens follows Material Design.

There are four onboarding screens:

  1. Login (Showing the login features of SUSI iOS) – Login to the app using SUSI.AI account or else signup to create a new account or just skip login.
  2. Chat Interface (Showing the chat screen of SUSI iOS) – Interact with SUSI.AI asking queries. Use microphone button for voice interaction.
  3. SUSI Skill (Showing SUSI Skills features) – Browse and try your favorite SUSI.AI Skill.
  4. Chat Settings (SUSI iOS Chat Settings) – Personalize your chat settings for the better experience.

Onboarding Screens User Interface

 

There are three important components of every onboarding screen:

  1. Title – Title of the screen (Login, Chat Interface etc).
  2. Image – Showing the visual presentation of SUSI iOS features.
  3. Description – Small descriptions of features.

Onboarding screen user control:

  • Pagination – Gives the user the ability to go to the next and previous onboarding screens.
  • Swiping – Left and right swipes are implemented to enable the user to go to the next and previous onboarding screens.
  • Skip Button – Enables users to skip the onboarding instructions and go directly to the login screen.

Implementation of Onboarding Screens:

  • Initializing PaperOnboarding:
override func viewDidLoad() {
    super.viewDidLoad()

    UIApplication.shared.statusBarStyle = .lightContent
    view.accessibilityIdentifier = "onboardingView"

    setupPaperOnboardingView()
    skipButton.isHidden = false
    bottomLoginSkipButton.isHidden = true
    view.bringSubview(toFront: skipButton)
    view.bringSubview(toFront: bottomLoginSkipButton)
}

private func setupPaperOnboardingView() {
    let onboarding = PaperOnboarding()
    onboarding.delegate = self
    onboarding.dataSource = self
    onboarding.translatesAutoresizingMaskIntoConstraints = false
    view.addSubview(onboarding)

    // Add constraints
    for attribute: NSLayoutAttribute in [.left, .right, .top, .bottom] {
        let constraint = NSLayoutConstraint(item: onboarding,
                                            attribute: attribute,
                                            relatedBy: .equal,
                                            toItem: view,
                                            attribute: attribute,
                                            multiplier: 1,
                                            constant: 0)
        view.addConstraint(constraint)
    }
}

 

  • Adding content using dataSource methods:

    let items = [
        OnboardingItemInfo(informationImage: Asset.login.image,
                           title: ControllerConstants.Onboarding.login,
                           description: ControllerConstants.Onboarding.loginDescription,
                           pageIcon: Asset.pageIcon.image,
                           color: UIColor.skillOnboardingColor(),
                           titleColor: UIColor.white, descriptionColor: UIColor.white,
                           titleFont: titleFont, descriptionFont: descriptionFont),
        OnboardingItemInfo(informationImage: Asset.chat.image,
                           title: ControllerConstants.Onboarding.chatInterface,
                           description: ControllerConstants.Onboarding.chatInterfaceDescription,
                           pageIcon: Asset.pageIcon.image,
                           color: UIColor.chatOnboardingColor(),
                           titleColor: UIColor.white, descriptionColor: UIColor.white,
                           titleFont: titleFont, descriptionFont: descriptionFont),
        OnboardingItemInfo(informationImage: Asset.skill.image,
                           title: ControllerConstants.Onboarding.skillListing,
                           description: ControllerConstants.Onboarding.skillListingDescription,
                           pageIcon: Asset.pageIcon.image,
                           color: UIColor.loginOnboardingColor(),
                           titleColor: UIColor.white, descriptionColor: UIColor.white,
                           titleFont: titleFont, descriptionFont: descriptionFont),
        OnboardingItemInfo(informationImage: Asset.skillSettings.image,
                           title: ControllerConstants.Onboarding.chatSettings,
                           description: ControllerConstants.Onboarding.chatSettingsDescription,
                           pageIcon: Asset.pageIcon.image,
                           color: UIColor.iOSBlue(),
                           titleColor: UIColor.white, descriptionColor: UIColor.white,
                           titleFont: titleFont, descriptionFont: descriptionFont)]

    extension OnboardingViewController: PaperOnboardingDelegate, PaperOnboardingDataSource {
        func onboardingItemsCount() -> Int {
            return items.count
        }

        func onboardingItem(at index: Int) -> OnboardingItemInfo {
            return items[index]
        }
    }
    

     

  • Hiding/Showing Skip Buttons:
    func onboardingWillTransitonToIndex(_ index: Int) {
    skipButton.isHidden = index == 3 ? true : false
    bottomLoginSkipButton.isHidden = index == 3 ? false : true
    }
    

Resources:


Implementing Version Control System for SUSI Skill CMS

SUSI Skill CMS now has a version control system where users can browse through all the previous revisions of a skill and roll back to a selected version. Users can modify existing skills and push the changes. So a skill could have been edited many times by the same or different users and so have many revisions. The version control functionalities help users to :

  • Browse through all the revisions of a selected skill
  • View the content of a selected revision
  • Compare any two selected revisions highlighting the changes
  • Option to edit and rollback to a selected revision.

Let us visit SUSI Skill CMS and try it out.

  1. Select a skill
  2. Click on versions button
  3. A table populated with previous revisions is displayed

  1. Clicking on a single revision opens the content of that version
  2. Selecting 2 versions and clicking on compare selected versions loads the content of the 2 selected revisions and shows the differences between the two.
  3. Clicking on Undo loads the selected revision and the latest version of that skill, highlighting the differences and also an editor loaded with the code of the selected revision to make changes and save to roll back.

How was this implemented?

Firstly, to get the previous revisions of a selected skill, we need the skill’s metadata, including model, group, language and skill name, which is used to make an ajax call to the server using the endpoint:

http://api.susi.ai/cms/getSkillHistory.json?model=MODEL&group=GROUP&language=LANGUAGE&skill=SKILL_NAME
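
A minimal sketch of this request, following the jQuery JSONP pattern used elsewhere in the CMS (the state and variable names below are assumptions, not the exact implementation), could look like this:

// Sketch only: fetch the revision history once the skill metadata is in state
let meta = this.state.skillMeta;
let historyURL = 'http://api.susi.ai/cms/getSkillHistory.json?model=' + meta.modelValue +
  '&group=' + meta.groupValue + '&language=' + meta.languageValue +
  '&skill=' + meta.skillName;
let self = this;
$.ajax({
  url: historyURL,
  dataType: 'jsonp',
  jsonp: 'callback',
  crossDomain: true,
  success: function (data) {
    // populate the versions table from the returned commit list
    self.setState({ commits: data.commits, loading: false });
  },
});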

We create a new component SkillVersion and pass the skill meta data in the pathname while accessing that component. The path where SkillVersion component is loaded is /:category/:skill/versions/:lang . We parse this data from the path and set our state with skill meta data. In componentDidMount we use this data to make the ajax call to the server to get all previous version data and update our state. A sample response from getSkillHistory endpoint looks like :

{
  "session": {
    "identity": {
      "type": "",
      "name": "",
      "anonymous":
    }
  },
  "commits": [
    {
      "commitRev": "",
      "author_mail": "AUTHOR_MAIL_ID",
      "author": "AUTHOR_NAME",
      "commitID": "COMMIT_ID",
      "commit_message": "COMMIT_MESSAGE",
      "commitName": "COMMIT_NAME",
      "commitDate": "COMMIT_DATE"
    },
  ],
  "accepted": TRUE/FALSE
}

We now populate the table with the obtained revision history. We used Material UI Table for tabulating the data. The first 2 columns of the table have radio buttons to select any 2 revisions. The left side radio buttons are for selecting the older versions and the right side radio buttons to select the more recent versions. We keep track of the selected versions through onCheck function of the radio buttons and updating state accordingly.

if(side === 'right'){
  if(!(index >= currLeft)){
    rightChecks.fill(false);
    rightChecks[index] = true;
    currRight = index;
  }
}
else if(side === 'left'){
  if(!(index <= currRight)){
    leftChecks.fill(false);
    leftChecks[index] = true;
    currLeft = index;
  }
}
this.setState({
  currLeftChecked: currLeft,
  currRightChecked: currRight,
  leftChecks: leftChecks,
  rightChecks: rightChecks,
});

Once 2 versions are selected and we click on compare selected versions button, we get the currently selected versions stored from getCheckedCommits function and we are redirected to /:category/:skill/compare/:lang/:oldid/:recentid where we pass the selected 2 revisions commitIDs in the URL.

{(this.state.commitsChecked.length === 2) &&
<Link to={{
  pathname: '/'+this.state.skillMeta.groupValue+
            '/'+this.state.skillMeta.skillName+
            '/compare/'+this.state.skillMeta.languageValue+
            '/'+checkedCommits[0].commitID+
            '/'+checkedCommits[1].commitID,
}}>
  <RaisedButton
    label='Compare Selected Versions'
    backgroundColor='#4285f4'
    labelColor='#fff'
    style={compareBtnStyle}
  />
</Link>
}

SkillHistory Component is now loaded and the 2 selected revisions commitIDs are parsed from the URL pathname. Once we have the commitIDs we make ajax calls to the server to get the code for that particular commit. The skill meta data is also parsed from the URL path which is required to make the server call to getFileAtCommitID.

http://api.susi.ai/cms/getSkillHistory.json?model=MODEL&group=GROUP&language=LANGUAGE&skill=SKILL_NAME&commitID=COMMIT_ID

We make the ajax calls in componentDidMount and update the state with the received data. A sample response from getFileAtCommitID looks like :

{
  "accepted": TRUE/FALSE,
  "file": "CONTENT",
  "session": {
    "identity": {
       "type": "",
       "name": "",
       "anonymous":
    }
  }
}

We populate the code of each revision in an editor. We used react-ace as our editor component where we use the value prop to populate the content and display it in read-only mode.

<AceEditor
  mode='java'
  readOnly={true}
  theme={this.state.editorTheme}
  width='100%'
  fontSize={this.state.fontSizeCode}
  height= '400px'
  value={this.state.commitData[0].code}
  showPrintMargin={false}
  name='skill_code_editor'
  editorProps={{$blockScrolling: true}}
/>

We then show the differences between the 2 selected versions content. To compare and highlight the differences, we used react-diff package which takes in the content of both the commits as inputA and inputB props and we compare character by character using the type chars prop. Here input A is compared with input B. The component compares and returns the highlighted element which we display in a scrollable div preventing overflows.

{/* latest code should be inputB */}
<Diff
  inputA={this.state.commitData[0].code}
  inputB={this.state.commitData[1].code}
  type='chars'
/>

Clicking on Undo then redirects to /:category/:skill/edit/:lang/:latestid/:revertid, where latestid is the commitID of the latest revision and revertid is the commitID of the older of the 2 commits selected initially. This loads the SkillRollBack component, where we again parse the skill metadata and the commit IDs from the URL pathname and call getFileAtCommitID to get the content for the latest and the reverting commit. We again populate the content in an editor using react-ace and show the differences using react-diff. Finally, the modify skill component is loaded with an editor preloaded with the code of the reverting commit, and an interface similar to modify skill is shown where the user can edit the content of the reverting commit and push the changes.

let baseUrl = this.getSkillAtCommitIDUrl() ;
let self = this;
var url1 = baseUrl + self.state.latestCommit;
$.ajax({
  url: url1,
  jsonpCallback: 'pc',
  dataType: 'jsonp',
  jsonp: 'callback',
  crossDomain: true,
  success: function (data1) {
    var url2 = baseUrl + self.state.revertingCommit;
    $.ajax({
      url: url2,
      jsonpCallback: 'pd',
      dataType: 'jsonp',
      jsonp: 'callback',
      crossDomain: true,
      success: function (data2) {
        self.updateData([{
        code:data1.file,
        commitID:self.state.latestCommit,
      },{
        code:data2.file,
        commitID:self.state.revertingCommit,
      }])
      }
    });
  }
});

Here, we make nested ajax calls to maintain synchronization and update the state only after we receive data from both calls. If we made the ajax calls in a loop instead, the second call would not wait for the first one to finish and would be likely to fail.

This is how the skill version system was implemented in SUSI Skill CMS. You can find the complete code at SUSI Skill CMS Repository. Feel free to contribute.

Resources:


Fetching Images for RSS Responses in SUSI Web Chat

Initially, SUSI Web Chat rendered RSS action type responses like this:

The response from the server initially only contained

  • Title
  • Description
  • Link

We needed to improve the web search & RSS results display and also add images for the results.

The web search & RSS results are now rendered as :

How was this implemented?

SUSI AI uses Yacy to fetch RSS feeds. Firstly, the server console process that returns the RSS feeds from Yacy needs to be configured to return images too.

"yacy":{
  "example":"http://127.0.0.1:4000/susi/console.json?q=%22SELECT%20title,%20link%20FROM%20yacy%20WHERE%20query=%27java%27;%22",
  "url":"http://yacy.searchlab.eu/solr/select?wt=yjson&q=",
  "test":"java",
  "parser":"json",
  "path":"$.channels[0].items",
  "license":""
}

In a console process, we provide the URL needed to fetch data from, the query parameter needed to be passed to the URL and the path to look for the answer in the API response.

  • url = <url>   – the URL to the remote JSON service which will be used to retrieve information. It must contain a $query$ string.
  • test = <parameter> – the parameter that will replace the $query$ string inside the given URL. It is required to test the service.

Here the URL used is :

http://yacy.searchlab.eu/solr/select?wt=yjson&q=QUERY

To include images in RSS action responses, we need to parse the images also from the Yacy response. For this, we need to add `image` in the selection rule while calling the console process

"process":[
  {
    "type":"console",
    "expression":"SELECT title,description,link FROM yacy WHERE query='$1$';"
  }
]

Now the response from the server for RSS action type will also include `image` along with title, description, and link. An example response for the query `Google` :

{
  "title": "Terms of Service | Google Analytics \u2013 Google",
  "description": "Read Google Analytics terms of service.",
  "link": "http://www.google.com/analytics/terms/",
  "image": "https://www.google.com/images/branding/googlelogo/1x/googlelogo_color_116x41dp.png"
}

However, the results at times, do not contain images because there are none stored in the index. This may happen if the result comes from p2p transmission within Yacy where no images are transmitted. So in cases where images are not returned by the server, we use the link preview service to preview the link and fetch the image.

The endpoint for previewing the link is :

BASE_URL+'/susi/linkPreview.json?url=URL'

On the client side, we first search the response for data objects with images in the API actions. Then, amongst the remaining data objects in answers[0].data, we preview the link to fetch an image, keeping a check on the count. This needs to be performed when processing the history cognitions too. To preview the remaining links, we cannot simply make ajax calls directly in a loop. To handle this, nested ajax calls are made using the function previewURLForImage(): we loop through the remaining links, and on success we decrement the count and call previewURLForImage() on the next link, while on error we call previewURLForImage() on the next link without decrementing the count.

success: function (rssResponse) {
  if(rssResponse.accepted){
    respData.image = rssResponse.image;
    respData.descriptionShort = rssResponse.descriptionShort;
    receivedMessage.rssResults.push(respData);
  }
  if(receivedMessage.rssResults.length === count ||
    j === remainingDataIndices.length - 1){
    let message = ChatMessageUtils.getSUSIMessageData(receivedMessage, currentThreadID);
    ChatAppDispatcher.dispatch({
      type: ActionTypes.CREATE_SUSI_MESSAGE,
      message
    });
  }
  else{
    j+=1;
    previewURLForImage(receivedMessage,currentThreadID,
BASE_URL,data,count,remainingDataIndices,j);
  }
},
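
For orientation, here is a minimal sketch of how the surrounding previewURLForImage() call could be structured. The signature is taken from the snippet above; the error branch and endpoint handling shown here are assumptions rather than the exact implementation.

// Sketch only: outer structure of the nested preview call
function previewURLForImage(receivedMessage, currentThreadID, BASE_URL,
  data, count, remainingDataIndices, j) {
  let respData = data[remainingDataIndices[j]];
  let previewURL = BASE_URL + '/susi/linkPreview.json?url=' + respData.link;
  $.ajax({
    url: previewURL,
    dataType: 'jsonp',
    jsonp: 'callback',
    crossDomain: true,
    success: function (rssResponse) {
      // success handler as shown above: store the result, dispatch when done,
      // otherwise recurse with j + 1
    },
    error: function () {
      // try the next link without decrementing the count
      if (j < remainingDataIndices.length - 1) {
        previewURLForImage(receivedMessage, currentThreadID, BASE_URL,
          data, count, remainingDataIndices, j + 1);
      }
    },
  });
}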

We store the results as rssResults, which are used in MessageListItems to fetch the data and render it. The nested calling of previewURLForImage() ends when we have the required count of results or we have finished trying all links for previewing images. We then dispatch the message to the message store. We now improve the UI. I used Material UI Cards to display the results and react-slick for the carousel-like display.

<Card className={cardClass} key={i} onClick={() => {
  window.open(tile.link,'_blank')
}}>
  {tile.image &&
    (
      <CardMedia>
        <img src={tile.image} alt="" className='card-img'/>
      </CardMedia>
    )
  }
  <CardTitle title={tile.title} titleStyle={titleStyle}/>
  <CardText>
    <div className='card-text'>{cardText}</div>
    <div className='card-url'>{urlDomain(tile.link)}</div>
  </CardText>
</Card>

We used the full width of the message section to display the results by not wrapping the result in message-list-item class. The entire card is hyperlinked to the link. Along with title and description, the URL info is also shown at the bottom right. To get the domain name from the link, urlDomain() function is used which makes use of the HTML anchor tag to get the domain info.

function urlDomain(data) {
  var a = document.createElement('a');
  a.href = data;
  return a.hostname;
}

To prevent stretching of images we use `object-fit: contain;` to make the images fit the image container and align it to the middle.

We finally have our RSS results with images and an improved UI. The complete code can be found at the SUSI WebChat Repo. Feel free to contribute.

Resources

Implementing Text To Speech Settings in SUSI WebChat

SUSI Web Chat has Text to Speech (TTS) Feature where it gives voice replies for user queries. The Text to Speech functionality was added using Speech Synthesis Feature of the Web Speech API. The Text to Speech Settings were added to customise the speech output by controlling features like :

  1. Language
  2. Rate
  3. Pitch

Let us visit SUSI Web Chat and try it out.

First, ensure that the settings have SpeechOutput or SpeechOutputAlways enabled. Then click on the Mic button and ask a query. SUSI responds to your query with a voice reply.

To control the Speech Output, visit Text To Speech Settings in the /settings route.

First, let us look at the language settings. The drop-down list for Language is populated when the app is initialised. The speechSynthesis.onvoiceschanged function is triggered when the app loads initially. There we call speechSynthesis.getVoices() to get the voice list of all the languages currently supported by that particular browser. We store this in the MessageStore using the ActionTypes.INIT_TTS_VOICES action type.

window.speechSynthesis.onvoiceschanged = function () {
  if (!MessageStore.getTTSInitStatus()) {
    var speechSynthesisVoices = speechSynthesis.getVoices();
    Actions.getTTSLangText(speechSynthesisVoices);
    Actions.initialiseTTSVoices(speechSynthesisVoices);
  }
};

We also get the translated text for every language present in the voice list for the text `This is an example of speech synthesis` using the Google Translate API. This is called initially for all the languages and is stored as the translatedText attribute in the voice list for each element. It is later used when the user wants to listen to an example of speech output for a selected language, rate and pitch.

https://translate.googleapis.com/translate_a/single?client=gtx&sl=en-US&tl=TARGET_LANGUAGE_CODE&dt=t&q=TEXT_TO_BE_TRANSLATED
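
A minimal sketch of that translation call (the response parsing and the function name are assumptions) might look like this:

// Sketch only: fetch the example sentence translated into a voice's language
function fetchTranslatedText(targetLangCode, onResult) {
  const text = 'This is an example of speech synthesis';
  const url = 'https://translate.googleapis.com/translate_a/single?client=gtx&sl=en-US' +
    '&tl=' + targetLangCode + '&dt=t&q=' + encodeURIComponent(text);
  fetch(url)
    .then(response => response.json())
    // the translated string is expected as the first element of the first segment
    .then(data => onResult(data[0][0][0]))
    .catch(() => onResult(text)); // fall back to the English sentence
}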

When the user visits the Text To Speech Settings, then the voice list stored in the MessageStore is retrieved and the drop down menu for Language is populated. The default language is fetched from UserPreferencesStore and the default language is accordingly highlighted in the dropdown. The list is parsed and populated as a drop down using populateVoiceList() function.

let voiceMenu = voices.map((voice,index) => {
  if(voice.translatedText === null){
    voice.translatedText = this.speechSynthesisExample;
  }
  langCodes.push(voice.lang);
  return(
    <MenuItem value={voice.lang}
              key={index}
              primaryText={voice.name+' ('+voice.lang+')'} />
  );
});

The language selected using this dropdown is only used as the language for the speech output when the server doesn’t specify the language in its response and the browser language is undefined. We then create sliders using Material UI for adjusting speech rate and pitch.

<h4 style={{'marginBottom':'0px'}}><Translate text="Speech Rate"/></h4>
<Slider
  min={0.5}
  max={2}
  value={this.state.rate}
  onChange={this.handleRate} />

The range for the sliders is :

  • Rate : 0.5 – 2
  • Pitch : 0 – 2

The default value for both rate and pitch is 1. We create a controlled slider saving the values in state and using onChange function to record change in values. The Reset buttons can be used to reset the rate and pitch values respectively to their default values. Once the language, rate and pitch values have been selected we can click on `Play a short demonstration of speech synthesis`  to listen to a voice reply with the chosen settings.

{ this.state.playExample &&
  (
    <VoicePlayer
       play={this.state.play}
       text={voiceOutput.voiceText}
       rate={this.state.rate}
       pitch={this.state.pitch}
       lang={this.state.ttsLanguage}
       onStart={this.onStart}
       onEnd={this.onEnd}
    />
  )
}

We use the VoicePlayer by passing the required props to get the speech output. onStart and onEnd functions are triggered at the beginning and ending of the speech synthesis and are used to control the state from the parent component. Chosen language, rate, pitch and translated text are passed as props to VoicePlayer which creates a new SpeechSynthesisUtterance() with the passed props and plays the speech output.
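
Internally, a VoicePlayer built this way boils down to a few lines of the Web Speech API. The sketch below assumes the prop names used above and is not the exact component code:

// Sketch only: what VoicePlayer does with the passed props
function speak({ text, rate, pitch, lang, onStart, onEnd }) {
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.rate = rate;    // 0.5 - 2
  utterance.pitch = pitch;  // 0 - 2
  utterance.lang = lang;    // e.g. 'en-US'
  utterance.onstart = onStart;
  utterance.onend = onEnd;
  window.speechSynthesis.speak(utterance);
}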

On saving these settings and then using the Mic button to get voice replies we see that the voice output is controlled according to the selected settings.

Finally, we have to store the selected settings on the server and ensure that these are pulled when the app is initialized. The format in which these settings are stored in the server is :

Speech Rate

- Used to control rate of speech output.
- SETTING_NAME :  `speechRate`
- SETTING_VALUE : `0.5 - 2`
- DEFAULT_VALUE : `1`
 
Speech Pitch

- Used to control pitch of speech output.
- SETTING_NAME :  `speechPitch`
- SETTING_VALUE : `0 - 2`
- DEFAULT_VALUE : `1`
 
TTS Language

- Used to set the language for Text-To-Speech used when the response from server doesnt specify language and the browser language is also undefined.
- SETTING_NAME :  `ttsLanguage`
- SETTING_VALUE : `Language Code (string)`
- DEFAULT_VALUE : `en-US`

This is how the Text To Speech Settings were implemented in SUSI Web Chat. The complete code can be found at SUSI Web Chat Repository.

PS: To test whether your browser supports Text To Speech, open your browser console and try the following :

  • var msg = new SpeechSynthesisUtterance('Hello World');
  • window.speechSynthesis.speak(msg)

If you get a speech output, then the Web Speech API’s Speech Synthesis is supported by your browser and the Text To Speech features of SUSI Web Chat will work. The Web Speech API has support for all the latest Chrome browsers, as mentioned in the Web Speech API Mozilla docs. However, there are a few bugs with some Chromium versions; please check out more on how to fix them locally in this link.

Resources:

 

 


Generating Map Action Responses in SUSI AI

SUSI AI responds to location related user queries with a Map action response. The different types of responses are referred to as actions which tell the client how to render the answer. One such action type is the Map action type. The map action contains latitude, longitude and zoom values telling the client to correspondingly render a map with the given location.

Let us visit SUSI Web Chat and try it out.

Query: Where is London

Response: (API Response)

The API response actions contain text describing the specified location, an anchor with the text `Here is a map` linked to OpenStreetMap, and a map with the location coordinates.

Let us look at how this is implemented on server.

For location related queries, the key where is used as an identifier. Once the query is matched with this key, a regular expression `where is (?:(?:a )*)(.*)` is used to parse the location name.

"keys"   : ["where"],
"phrases": [
  {"type":"regex", "expression":"where is (?:(?:a )*)(.*)"},
]

The parsed location name is stored in $1$ and is used to make API calls to fetch information about the place and its location. A console process is used to fetch the required data from an API.

"process": [
  {
    "type":"console",
    "expression":"SELECT location[0] AS lon, location[1] AS lat FROM locations WHERE query='$1$';"},
  {
    "type":"console",
    "expression":"SELECT object AS locationInfo FROM location-info WHERE query='$1$';"}
],

Here, we need to make two API calls :

  • For getting information about the place
  • For getting the location coordinates

First let us look at how a Console Process works. In a console process we provide the URL needed to fetch data from, the query parameter needed to be passed to the URL and the path to look for the answer in the API response.

  • url = <url>   – the url to the remote json service which will be used to retrieve information. It must contain a $query$ string.
  • test = <parameter> – the parameter that will replace the $query$ string inside the given url. It is required to test the service.

For getting the information about the place, we used Wikipedia API. We name this console process as location-info and added the required attributes to run it and fetch data from the API.

"location-info": {
  "example":"http://127.0.0.1:4000/susi/console.json?q=%22SELECT%20*%20FROM%20location-info%20WHERE%20query=%27london%27;%22",
  "url":"https://en.wikipedia.org/w/api.php?action=opensearch&limit=1&format=json&search=",
  "test":"london",
  "parser":"json",
  "path":"$.[2]",
  "license":"Copyright by Wikipedia, https://wikimediafoundation.org/wiki/Terms_of_Use/en"
}

The attributes used are :

  • url : The Media WIKI API endpoint
  • test : The Location name which will be appended to the url before making the API call.
  • parser : Specifies the response type for parsing the answer
  • path : Points to the location in the response where the required answer is present

The API endpoint called is of the following format :

https://en.wikipedia.org/w/api.php?action=opensearch&limit=1&format=json&search=LOCATION_NAME

For the query where is london, the API call made returns

[
  "london",
  ["London"],
  ["London  is the capital and most populous city of England and the United Kingdom."],
  ["https://en.wikipedia.org/wiki/London"]
]

The path $.[2] points to the third element of the array i.e “London  is the capital and most populous city of England and the United Kingdom.” which is stored in $locationInfo$.

Similarly, to get the location coordinates, another API call is made to the loklak API.

"locations": {
  "example":"http://127.0.0.1:4000/susi/console.json?q=%22SELECT%20*%20FROM%20locations%20WHERE%20query=%27rome%27;%22",
  "url":"http://api.loklak.org/api/console.json?q=SELECT%20*%20FROM%20locations%20WHERE%20location='$query$';",
  "test":"rome",
  "parser":"json",
  "path":"$.data",
  "license":"Copyright by GeoNames"
},

The location coordinates are found in $.data.location in the API response. The location coordinates are stored as latitude and longitude in $lat$ and $lon$ respectively.

Finally we have description about the location and its coordinates, so we create the actions to be put in the server response.

The first action is of type answer and the text to be displayed is given by $locationInfo$ where the data from wikipedia API response is stored.

{
  "type":"answer",
  "select":"random",
  "phrases":["$locationInfo$"]
},

The second action is of type anchor. The text to be displayed is `Here is a map` and it must be hyperlinked to openstreetmaps with the obtained $lat$ and $lon$.

{
  "type":"anchor",
  "link":"https://www.openstreetmap.org/#map=13/$lat$/$lon$",
  "text":"Here is a map"
},

The last action is of type map which is populated for latitude and longitude using $lat$ and $lon$ respectively and the zoom value is specified to be 13.

{
  "type":"map",
  "latitude":"$lat$",
  "longitude":"$lon$",
  "zoom":"13"
}

Final output from the server will now contain the three actions with the required data obtained from the respective API calls made. For the sample query `where is london` , the actions will look like :

"actions": [
  {
    "type": "answer",
    "language": "en",
    "expression": "London  is the capital and most populous city of England and the United Kingdom."
  },
  {
    "type": "anchor",
    "link":   "https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228",
    "text": "Here is a map",
    "language": "en"
  },
  {
    "type": "map",
    "latitude": "51.51279067225417",
    "longitude": "-0.09184009399817228",
    "zoom": "13",
    "language": "en"
  }
],

This is how the map action responses are generated for location related queries. The complete code can be found at SUSI AI Server Repository.

Resources:
