Implementation of Shimmer Effect in Layouts in SUSI.AI Android App

The shimmer effect was created by Facebook to indicate the loading of data on pages where content is fetched from the internet. It was designed as an alternative to the existing ProgressBar and the usual loader, and it gives the UI a better user experience. Let's see how we can implement it. Here, I am going to use SUSI.AI (a smart assistant app) as a reference app for the code demonstration. I am working on this project during my GSoC period, and while working I found the need to implement this feature in many places, so I am writing this blog to share how I implemented it in the app.

First of all, we need to add the shimmer dependency to the app-level Gradle file (a sketch of the dependency and the start/stop calls follows at the end of this post).

Next, we need to create a placeholder layout simply by using views. This placeholder should resemble the actual layout. A grey background is usually preferred for the placeholder, and it should not have any text written; it should consist of views only. Let's consider the placeholder used in SUSI.AI, and then have a glance at the actual items whose placeholders we have made.

Now, after the creation of the placeholder, we need to add this placeholder to the main layout file. Here, I have added the placeholder 6 times so that the entire screen gets covered up. You can add it as many times as you want.

The next and final task is to start and stop the shimmer effect according to the logic of the code. Here, the shimmer starts as soon as the fragment is created and stops when the data is successfully loaded from the server. First of all, we need to create a reference to the shimmer; then we use this reference to start/stop the shimmer effect. In Kotlin, we can directly use the id from the layout without creating an explicit reference. We start the shimmer effect simply by calling the startShimmer() function on the shimmer reference. Similarly, we can stop it using the stopShimmer() function on the same reference.

Resources:
Framework: Shimmer in Android
Documentation: Shimmer, Android Design
SUSI.AI Android App: PlayStore, GitHub

Tags: SUSI.AI Android App, Kotlin, SUSI.AI, FOSSASIA, GSoC, Android, Shimmer
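Since the original snippets are not reproduced in this excerpt, here is a minimal Kotlin sketch of the pieces described above, assuming Facebook's shimmer library and Kotlin synthetic view access. The version number, the fragment name, the shimmer_container id, and the loadDataFromServer()/showItems() helpers are illustrative assumptions, not the app's actual code.

// app-level build.gradle.kts (the version is an assumption):
//   implementation("com.facebook.shimmer:shimmer:0.5.0")

import android.os.Bundle
import android.view.View
import androidx.fragment.app.Fragment
import kotlinx.android.synthetic.main.fragment_chat.shimmer_container

class ChatFragment : Fragment() {
    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        shimmer_container.startShimmer()   // hypothetical ShimmerFrameLayout wrapping the placeholders
        loadDataFromServer { items ->      // hypothetical async loader
            shimmer_container.stopShimmer()           // stop the animation...
            shimmer_container.visibility = View.GONE  // ...and hide the placeholders
            showItems(items)                          // hypothetical: render the real items
        }
    }
}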


Gestures in SUSI.AI Android

Gestures have become one of the most widely used features of an app. The user usually expects that some tasks should be performed by the app when he or she executes some gestures on the screen. A "touch gesture" occurs when a user places one or more fingers on the touch screen, and your application interprets that pattern of touches as a particular gesture. There are correspondingly two phases to gesture detection:

1. Gather data about touch events.
2. Interpret the data to see if it meets the criteria for any of the gestures your app supports.

There are various kinds of gestures supported by Android. Some of them are:

Tap
Double Tap
2-finger Tap
2-finger double tap
3-finger tap
Pinch

In this post, we will go through the SUSI.AI Android app (a smart assistant app), which uses a "right-to-left swipe" gesture detector. When such a gesture is detected inside the Chat Activity, it opens the Skills Activity. This makes the app very user-friendly. Before we start implementing the code, let's go through the two steps mentioned above in detail.

1st Step "Gather Data": When a user places one or more fingers on the screen, this triggers the callback onTouchEvent() on the View that received the touch events. For each sequence of touch events (position, pressure, size, the addition of another finger, etc.) that is ultimately identified as a gesture, onTouchEvent() is fired several times. The gesture starts when the user first touches the screen, continues as the system tracks the position of the user's finger(s), and ends by capturing the final event of the user's fingers leaving the screen. Throughout this interaction, the MotionEvent delivered to onTouchEvent() provides the details of every interaction. Your app can use the data provided by the MotionEvent to determine if a gesture it cares about happened.

2nd Step "Data Interpretation": The data received needs to be properly interpreted. The gestures should be properly recognized and processed to perform further actions. An app might have different gestures integrated into the same page, like "swipe-to-refresh", "double-tap", "single tap", etc. Upon successfully differentiating these kinds of gestures, further functions/tasks should be executed.

Let's go through the code present in SUSI now. First of all, a new class "CustomGestureListener" is created. This class extends "SimpleOnGestureListener", which is part of Android's "GestureDetector" framework. The class contains a function "onFling", which detects gestures along the horizontal axis: event1.getX() and event2.getX() give the start and end positions of the gesture along the horizontal axis of the device. When the difference between them is greater than 0, it indicates that the user has swiped from right to left. However, this would fire on even a very minor movement, which the user might have made accidentally or by just touching the screen. So, to avoid such minor impulses, we execute our task only when this difference lies between 100 and 1000 (a sketch of such a listener follows at the end of this excerpt). Inside the onCreate method, a new CustomGestureListener instance is created, passing…
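The listener itself is not reproduced in this excerpt, so below is a minimal Kotlin sketch of the idea: an onFling override that treats a horizontal delta between 100 and 1000 pixels as a right-to-left swipe. The openSkillsActivity() helper and the exact thresholds are assumptions for illustration, not the SUSI source verbatim.

import android.view.GestureDetector
import android.view.MotionEvent

class CustomGestureListener : GestureDetector.SimpleOnGestureListener() {
    override fun onFling(
        event1: MotionEvent, event2: MotionEvent,
        velocityX: Float, velocityY: Float
    ): Boolean {
        // A positive delta means the finger ended left of where it started,
        // i.e. a right-to-left swipe.
        val deltaX = event1.x - event2.x
        if (deltaX in 100f..1000f) {   // ignore tiny, accidental movements
            openSkillsActivity()       // hypothetical helper that starts the Skills Activity
        }
        return true
    }
}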


Implementing a Chat Bubble in SUSI.AI

SUSI.AI now has a chat bubble on the bottom right of every page to assist you. The chat bubble allows you to connect with susi.ai in just a click. You can now directly play test examples of a skill on the chatbot. It can also be viewed in full-screen mode.

Redux Code

The Redux action associated with the chat bubble is handleChatBubble. The Redux state chatBubble can have 3 states:

minimised - the chat bubble is not visible. This state is set when the chat is viewed in full-screen mode.
bubble - only the chat bubble is visible. This state is set when the close icon is clicked, and on toggle.
full - the chat bubble along with the message section is visible. This state is set when the minimize icon is clicked in full-screen mode, and on toggle.

const defaultState = {
  chatBubble: 'bubble',
};

export default handleActions(
  {
    [actionTypes.HANDLE_CHAT_BUBBLE](state, { payload }) {
      return {
        ...state,
        chatBubble: payload.chatBubble,
      };
    },
  },
  defaultState,
);

Speech box for skill example

The user can click on the speech box for a skill example and immediately view the answer for the skill on the chatbot. When a speech bubble is clicked, a query parameter testExample is added to the URL. The value of this query parameter is resolved and answered by the Message Composer. To be able to generate an answer bubble again and again for the same query, we have a reducer state testSkillExampleKey which is updated when the user clicks on the speech box. This is passed as the key parameter to the messageSection (see the sketch at the end of this excerpt).

Chat Bubble Code

The functions involved in the working of the chatBubble code are:

openFullScreen - called when the full-screen icon is clicked in the tab and laptop view, and also when the chat bubble is clicked in the mobile view. This opens up a full-screen dialog with the message section. It dispatches a handleChatBubble action which sets the chatBubble reducer state to minimised.
closeFullScreen - called when the exit full-screen icon is clicked. It dispatches a handleChatBubble action which sets the chatBubble reducer state to full.
toggleChat - called when the user clicks on the chat bubble. It dispatches a handleChatBubble action which toggles the chatBubble reducer state between full and bubble.
handleClose - called when the user clicks on the close icon. It dispatches a handleChatBubble action which sets the chatBubble reducer state to bubble.

openFullScreen = () => {
  const { actions } = this.props;
  actions.handleChatBubble({
    chatBubble: 'minimised',
  });
  actions.openModal({
    modalType: 'chatBubble',
    fullScreenChat: true,
  });
};

closeFullScreen = () => {
  const { actions } = this.props;
  actions.handleChatBubble({
    chatBubble: 'full',
  });
  actions.closeModal();
};

toggleChat = () => {
  const { actions, chatBubble } = this.props;
  actions.handleChatBubble({
    chatBubble: chatBubble === 'bubble' ? 'full' : 'bubble',
  });
};

handleClose = () => {
  const { actions } = this.props;
  actions.handleChatBubble({ chatBubble: 'bubble' });
  actions.closeModal();
};

Message Section Code (Reduced)

The message section comprises three parts: the actionBar, the messageSection and the message Composer.

Action Bar

The actionBar consists of the action buttons…
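As a rough illustration of the testExample/testSkillExampleKey idea described above, here is a hedged JavaScript sketch using the standard URLSearchParams API; it is not the web client's actual code.

// Read the testExample query parameter that a speech-box click adds to the URL.
const testExample = new URLSearchParams(window.location.search).get('testExample');

// In render(): bumping testSkillExampleKey remounts the message section, so the
// same example query is resolved and answered again on each click (illustrative):
// <MessageSection key={testSkillExampleKey} testExample={testExample} />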


Adding 3D Home Screen Quick Actions to SUSI iOS App

Home screen quick actions are a convenient way to perform useful, app-specific actions right from the Home screen, using 3D Touch. Apply a little pressure to an app icon with your finger (more than you use for tap and hold) to see a list of available quick actions, and tap one to activate it. Quick actions can be static or dynamic. We have added some 3D home screen quick actions to our SUSI iOS app. In this post, we will see how they are implemented and how they work.

The following 3D home screen quick actions are added to SUSI iOS:

Open SUSI Skills - the user can go directly to SUSI skills without opening a chat screen.
Customize Settings - the user can customize their settings directly using this quick action.
Setup A Device - when the user quickly wants to configure his/her device for the SUSI Smart Speaker, this quick action is very helpful.
Change SUSI's Voice - the user can change SUSI's message-reading language accent directly from this quick action.

Each home screen quick action includes a title, an icon on the left or right (depending on your app's position on the home screen), and an optional subtitle. The title and subtitle are always left-aligned in left-to-right languages.

Step 1 - Adding the Shortcut Items

We add static home screen quick actions using the UIApplicationShortcutItems array in the app's Info.plist file (a textual sketch appears at the end of this post). Each entry in the array is a dictionary containing items matching properties of the UIApplicationShortcutItem class. As seen in the screenshot below, we have 4 shortcut items, and each item has three properties: UIApplicationShortcutItemIconType/UIApplicationShortcutItemIconFile, UIApplicationShortcutItemTitle, and UIApplicationShortcutItemType.

UIApplicationShortcutItemIconType and UIApplicationShortcutItemIconFile are the strings for setting the icon of a quick action. For system icons, we use the UIApplicationShortcutItemIconType property, and for custom icons, we use UIApplicationShortcutItemIconFile.
UIApplicationShortcutItemTitle is a required string that is displayed to the user.
UIApplicationShortcutItemType is a required app-specific string used to identify the quick action.

Step 2 - Handling the Shortcut

AppDelegate is the place where we handle all the home screen quick actions. We define these variables:

var shortcutHandled: Bool!
var shortcutIdentifier: String?

When a user chooses one of the quick actions, the system launches or resumes the app and calls the performActionForShortcutItem method in the app delegate:

func application(_ application: UIApplication, performActionFor shortcutItem: UIApplicationShortcutItem, completionHandler: @escaping (Bool) -> Void) {
    shortcutIdentifier = shortcutItem.type
    shortcutHandled = true
    completionHandler(shortcutHandled)
}

Whenever the application becomes active, the applicationDidBecomeActive function is called:

func applicationDidBecomeActive(_ application: UIApplication) {
    // Handle Home Screen Quick Actions
    handleHomeActions()
}

Inside the applicationDidBecomeActive function we call the handleHomeActions() method, which handles the home screen quick actions.

func handleHomeActions() {
    if shortcutHandled == true {
        shortcutHandled = false
        if shortcutIdentifier == ControllerConstants.HomeActions.openSkillAction {
            // Handle action accordingly
        } else if shortcutIdentifier == ControllerConstants.HomeActions.customizeSettingsAction {
            // Handle action accordingly
        } else if shortcutIdentifier == ControllerConstants.HomeActions.setupDeviceAction {
            // Handle action accordingly
        } else if shortcutIdentifier == ControllerConstants.HomeActions.changeVoiceAction {
            // Handle action accordingly
        }
    }
}

Final Output:

Resources - Home Screen Quick Actions - Human Interface Guidelines…
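Since the Info.plist screenshot is not reproduced here, one static entry might look like the following sketch. The icon file name and the type string are illustrative assumptions; the type must match what handleHomeActions() compares against.

<key>UIApplicationShortcutItems</key>
<array>
    <dict>
        <key>UIApplicationShortcutItemIconFile</key>
        <string>skills-icon</string>
        <key>UIApplicationShortcutItemTitle</key>
        <string>Open SUSI Skills</string>
        <key>UIApplicationShortcutItemType</key>
        <string>openSkillAction</string>
    </dict>
    <!-- three more entries: Customize Settings, Setup A Device, Change SUSI's Voice -->
</array>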


Adding report skills feature in SUSI iOS

SUSI.AI has various types of Skills that improve the user experience. Skills power the SUSI.AI personal chat assistant. SUSI skills have a nice feedback system, with three different feedback mechanisms:

5-Star Rating - rate skills from 1 (lowest) to 5 (highest) stars
Posting Feedback - the user can post feedback about a particular skill
Report Skill - the user can report a skill if he/she finds it inappropriate

In this post, we will see how the report skill feature works in SUSI iOS and how it is implemented. You can learn about the 5-star rating here and the posting feedback feature here.

Adding the report skill button -

let reportSkillButton: UIButton = {
    let button = UIButton(type: .system)
    button.contentHorizontalAlignment = .left
    button.setTitle("Report Skill", for: .normal)
    button.setTitleColor(UIColor.iOSGray(), for: .normal)
    button.titleLabel?.font = UIFont.systemFont(ofSize: 16)
    button.translatesAutoresizingMaskIntoConstraints = false
    return button
}()

Above, we set the translatesAutoresizingMaskIntoConstraints property to false. By default, the property is true for any view you create programmatically. If you add views in Interface Builder, the system automatically sets this property to false. If this property's value is true, the system creates a set of constraints that duplicate the behavior specified by the view's autoresizing mask.

Setting up the report skill button -

We set the constraints programmatically, as we created the button programmatically and set translatesAutoresizingMaskIntoConstraints to false. We also set a target on the button.

if let delegate = UIApplication.shared.delegate as? AppDelegate, let _ = delegate.currentUser {
    view.addSubview(reportSkillButton)
    reportSkillButton.widthAnchor.constraint(equalToConstant: 140).isActive = true
    reportSkillButton.heightAnchor.constraint(equalToConstant: 32).isActive = true
    reportSkillButton.leftAnchor.constraint(equalTo: contentType.leftAnchor).isActive = true
    reportSkillButton.topAnchor.constraint(equalTo: contentType.bottomAnchor, constant: 8).isActive = true
    reportSkillButton.addTarget(self, action: #selector(reportSkillAction), for: .touchUpInside)
}

In the method above, we only show the button if the user is logged in; only a logged-in user can report a skill. To check whether the user is logged in, we use the shared AppDelegate instance, where we save the logged-in user globally when the user signs in.

When the user clicks the Report Skill button, a popup opens up with a text field for the feedback message, like below:

This is how the UI looks!

When the user clicks the Report action after typing the feedback message, we use the following endpoint:

https://api.susi.ai/cms/reportSkill.json

with the following parameters: Model, Group, Skill, Language, Access Token, and Feedback (an example request shape is sketched at the end of this post).

Here is how we handle the API call within our app -

func reportSkill(feedbackMessage: String) {
    if let delegate = UIApplication.shared.delegate as? AppDelegate, let user = delegate.currentUser {
        let params = [
            Client.SkillListing.model: skill?.model as AnyObject,
            Client.SkillListing.group: skill?.group as AnyObject,
            Client.SkillListing.skill: skill?.skillKeyName as AnyObject,
            Client.SkillListing.language: Locale.current.languageCode as AnyObject,
            Client.SkillListing.accessToken: user.accessToken as AnyObject,
            Client.SkillListing.feedback: feedbackMessage as AnyObject
        ]
        Client.sharedInstance.reportSkill(params) { (success, error) in
            DispatchQueue.main.async {
                if success {
                    self.view.makeToast("Skill reported successfully")
                } else if let error = error {
                    self.view.makeToast(error)
                }
            }
        }
    }
}

On a successfully reported skill, we show a toast with the 'Skill reported successfully' message, and if there is an error reporting the skill, we present a toast with the error as the message.

Resources -
SUSI Skills: https://skills.susi.ai/
Apple's documentation on translatesAutoresizingMaskIntoConstraints
Allowing user to…
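For orientation, the resulting request has roughly this shape. The lower-case parameter names and the example values are assumptions inferred from the client code above, not taken from the server documentation:

https://api.susi.ai/cms/reportSkill.json?model=general&group=Knowledge&skill=news&language=en&access_token=<token>&feedback=Inappropriate+responses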


Connecting SUSI iOS App to SUSI Smart Speaker

SUSI Smart Speaker is an open-source speaker with many exciting features. The user needs an Android or iOS device to set up the speaker. You can refer to this post for the initial connection to the SUSI Smart Speaker. In this post, we will see how a user can connect the SUSI Smart Speaker to iOS devices (iPhone/iPad).

Implementation -

The first step is to detect whether an iOS device is connected to the SUSI.AI hotspot or not. For this, we match the currently connected Wi-Fi SSID with the SUSI.AI hotspot SSID. If it matches, we show the connected device in Device Activity to proceed further with the setup (see the fetchSSIDInfo() sketch at the end of this excerpt).

Choosing a Room -

The room name is basically the location of your SUSI Smart Speaker in the home. You may have multiple SUSI Smart Speakers in different rooms, so the purpose of adding the room is to differentiate between them.

When the user clicks on the displayed Wi-Fi cell, the initial setup starts. We use the didSelectRowAt method of UITableViewDelegate to get which cell was selected. On clicking the displayed Wi-Fi cell, a popup opens with a Room Location text field.

override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
    if indexPath.row == 0, let speakerSSID = fetchSSIDInfo(), speakerSSID == ControllerConstants.DeviceActivity.susiSSID {
        // Open a popup to select Rooms
        presentRoomsPopup()
    }
}

When the user clicks the Next button, we send the speaker's room location to the local server of the speaker using the following API endpoint, with the room name as a parameter:

http://10.0.0.1:5000/speaker_config/

Refer to this post for more detail about how choosing a room works and how it is implemented in SUSI iOS.

Sharing Wi-Fi Credentials -

On successfully choosing the room, we present a popup that asks the user to enter the credentials of the previously connected Wi-Fi, so that we can connect the Smart Speaker to a Wi-Fi network that provides an internet connection for playing music and running commands on the speaker. The popup has a text field for entering the Wi-Fi password. When the user clicks the Next button, we share the Wi-Fi credentials using the following API endpoint:

http://10.0.0.1:5000/wifi_credentials/

with the following parameters:

Wifissid - connected Wi-Fi SSID
Wifipassd - connected Wi-Fi password

With this API endpoint, we share the Wi-Fi SSID and password with the Smart Speaker. If the credentials are successfully accepted by the speaker, we present a popup for the user's SUSI account password; otherwise, we present the Enter Wi-Fi Credentials popup again.

Client.sharedInstance.sendWifiCredentials(params) { (success, message) in
    DispatchQueue.main.async {
        self.alertController.dismiss(animated: true, completion: nil)
        if success {
            self.presentUserPasswordPopup()
        } else {
            self.view.makeToast("", point: self.view.center, title: message, image: nil, completion: { didTap in
                UIApplication.shared.endIgnoringInteractionEvents()
                self.presentWifiCredentialsPopup()
            })
        }
    }
}

Sharing SUSI Account Credentials -

In the method above, we have seen that when the SUSI Smart Speaker accepts the Wi-Fi credentials, we proceed further with the SUSI account credentials. We open a popup asking for the user's SUSI account password. When the user clicks the Next button, we use the following API endpoint to share the user's SUSI account credentials with the SUSI Smart Speaker:

http://10.0.0.1:5000/auth/…
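fetchSSIDInfo() is called in the code above but not shown in this excerpt. On pre-iOS-13 SDKs it could be implemented with the CaptiveNetwork API along these lines; the body below is an assumed sketch matching the call site, not the exact SUSI iOS source.

import SystemConfiguration.CaptiveNetwork

// Returns the SSID of the currently connected Wi-Fi network, or nil if unavailable.
func fetchSSIDInfo() -> String? {
    guard let interfaces = CNCopySupportedInterfaces() as? [String] else { return nil }
    for interface in interfaces {
        if let info = CNCopyCurrentNetworkInfo(interface as CFString) as? [String: AnyObject],
            let ssid = info[kCNNetworkInfoKeySSID as String] as? String {
            return ssid  // the caller compares this against the SUSI.AI hotspot SSID
        }
    }
    return nil
}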


A Workflow of Auto Executing Services on SUSI.AI Smart Speaker

As we planned to create a headless client on the Raspberry Pi, the requirement was that the SUSI.AI programs should run automatically. To do so, we had to figure out a way to boot up various scripts on startup. We had the following options to execute the scripts on startup:

Editing the rc.local file
systemd rules
Crontab

We decided to proceed with systemd rules, because using rc.local or crontab requires modifying default system files, which, in case of any error, could quickly break OS functionality. We then created systemd units for the following services (a note on enabling them follows at the end of this post):

1. factory-daemon.service
2. python-flask.service
3. susi-server.service
4. update-daemon.service
5. susi-linux.service

Now I'll demonstrate the working and the functionality of each service being implemented.

1. Factory-Daemon Service

This service initiates the factory daemon on Raspberry Pi startup and keeps it running continuously, looking for any input from the GPIO port.

[Unit]
Description=SUSI Linux Factory Daemon
After=multi-user.target

[Service]
Type=simple
ExecStart=/usr/bin/python3 /home/pi/SUSI.AI/susi_linux/factory_reset/factory_reset.py

[Install]
WantedBy=multi-user.target

2. Python-Flask Service

This service starts a Python server to allow a handshake between the mobile apps and the Smart Speaker, which allows the user to configure the SUSI Smart Speaker accordingly.

[Unit]
Description=Python Server for SUSI Linux
After=multi-user.target

[Service]
Type=simple
ExecStart=/usr/bin/python3 /home/pi/SUSI.AI/susi_linux/access_point/server/server.py

[Install]
WantedBy=multi-user.target

3. SUSI-Server Service

This service starts the local SUSI Server as soon as the Raspberry Pi starts up, which in turn allows the SUSI Linux programs to fetch responses to queries very quickly.

[Unit]
Description=Starting SUSI Server for SUSI Linux
After=multi-user.target

[Service]
Type=oneshot
ExecStart=/home/pi/SUSI.AI/susi_linux/susi_server/susi_server/bin/restart.sh

[Install]
WantedBy=multi-user.target

4. Update-Daemon Service

This service creates a daemon which starts with the Raspberry Pi and fetches the latest updates for the repository from the upstream branch.

[Unit]
Description=Update Check- SUSI Linux
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/home/pi/SUSI.AI/susi_linux/update_daemon/update_check.sh

[Install]
WantedBy=multi-user.target

5. SUSI-Linux Service

This service finally runs the main SUSI Linux software after everything else has started.

[Unit]
Description=Starting SUSI Linux
Wants=network-online.target
After=network-online.target

[Service]
Type=idle
WorkingDirectory=/home/pi/SUSI.AI/susi_linux/
ExecStart=/usr/bin/python3 -m main

[Install]
WantedBy=multi-user.target

This blog gives a brief workflow of auto-executing services on the SUSI Smart Speaker.

Resources
systemd documentation: https://coreos.com/os/docs/latest/using-systemd-and-udev-rules.html
Crontab documentation: http://man7.org/linux/man-pages/man5/crontab.5.html
rc.local documentation: https://www.raspberrypi.org/documentation/linux/usage/rc-local.md
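The post does not show how these units are activated. Assuming the unit files are installed under /etc/systemd/system/, the standard systemctl sequence makes one of them start on every boot:

# Install and enable a unit so systemd starts it at boot (paths are assumptions).
sudo cp factory-daemon.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable factory-daemon.service   # hook it into multi-user.target
sudo systemctl start factory-daemon.service    # also start it immediately
systemctl status factory-daemon.service        # verify that it is running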


Configuring LED Lights with SUSI Smart Speaker

To make the SUSI Smart Speaker more interactive and to improve its visual aesthetics, we configured the SUSI Smart Speaker's responses with 3 RGB LED lights. We used a new PiHat as external hardware to drive the LEDs. The new hardware specs of the SUSI Smart Speaker are:

Raspberry Pi
ReSpeaker PiHat 2-Mic Array
External Speakers

Using an external PiHat not only added the RGB light functionality but also eliminated the need for a USB microphone and provided a factory reset button.

Configuring the PiHat as the default Audio driver

To use the PiHat as the default input driver, we use the package called PulseAudio, with the following command in the installation script:

pacmd set-sink-port alsa_output.platform-soc_sound.analog-stereo analog-output-headphones

Configuring PiHat's GPIO Button with Factory Reset

There is an onboard user button, which is connected to GPIO17. We use the Python library RPi.GPIO to detect the user button. The Python script is used in the following way:

import subprocess
import time

import alsaaudio
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(17, GPIO.IN)

while True:
    if GPIO.input(17) == 1:
        time.sleep(0.1)
    elif GPIO.input(17) == 0:
        # Measure how long the button stays pressed.
        start = time.time()
        while GPIO.input(17) == 0:
            time.sleep(0.1)
        end = time.time()
        total = end - start
        if total >= 7:
            subprocess.call(['bash', 'factory_reset.sh'])  # nosec
        else:
            # Short press: toggle mute.
            mixer = alsaaudio.Mixer()
            value = mixer.getvolume()[0]
            if value != 0:
                mixer.setvolume(0)
            else:
                mixer.setvolume(50)
        print(total)
        time.sleep(0.1)

This script watches the button configured on GPIO port 17 of the PiHat. If the button is pressed for more than 7 seconds, the factory reset process takes place; otherwise, the device is muted.

Configuring PiHat's LED with Speaker's Response

We use a Python library called spidev to sync the LED lights with SUSI's responses. spidev is used to send data to the SPI bus devices on the Raspberry Pi. The first step is installing spidev:

sudo pip install spidev

Now we create a class where we store all the methods that send signals to the bus port.
We treat the LED lights as a circular array and rotate the RGB lights:

from math import ceil

import spidev

class LED_COLOR:
    # Constants
    MAX_BRIGHTNESS = 0b11111
    LED_START = 0b11100000

    def __init__(self, num_led, global_brightness=MAX_BRIGHTNESS,
                 order='rgb', bus=0, device=1, max_speed_hz=8000000):
        self.num_led = num_led
        order = order.lower()
        self.rgb = RGB_MAP.get(order, RGB_MAP['rgb'])
        if global_brightness > self.MAX_BRIGHTNESS:
            self.global_brightness = self.MAX_BRIGHTNESS
        else:
            self.global_brightness = global_brightness
        self.leds = [self.LED_START, 0, 0, 0] * self.num_led
        self.spi = spidev.SpiDev()
        self.spi.open(bus, device)
        if max_speed_hz:
            self.spi.max_speed_hz = max_speed_hz

    def clear_strip(self):
        for led in range(self.num_led):
            self.set_pixel(led, 0, 0, 0)
        self.show()

    def set_pixel(self, led_num, red, green, blue, bright_percent=100):
        if led_num < 0:
            return
        if led_num >= self.num_led:
            return
        brightness = int(ceil(bright_percent * self.global_brightness / 100.0))
        ledstart = (brightness & 0b00011111) | self.LED_START
        start_index = 4 * led_num
        self.leds[start_index] = ledstart
        self.leds[start_index + self.rgb[0]] = red
        self.leds[start_index + self.rgb[1]] = green
        self.leds[start_index + self.rgb[2]] = blue

    def set_pixel_rgb(self, led_num, rgb_color, bright_percent=100):
        self.set_pixel(led_num, (rgb_color & 0xFF0000) >> 16,
                       (rgb_color & 0x00FF00) >> 8, rgb_color & 0x0000FF, bright_percent)

    def rotate(self, positions=1):
        cutoff = 4 * (positions % self.num_led)
        self.leds = self.leds[cutoff:]…
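The excerpt is cut off before RGB_MAP and the remaining methods appear. As an assumption, RGB_MAP is the usual APA102-style table of R/G/B byte offsets, and the class could be driven as in this hypothetical sketch; show(), which clear_strip() calls above, is among the methods not reproduced here.

# Assumed color-order map: offsets of red, green and blue inside each
# 4-byte LED frame, as in common APA102 drivers.
RGB_MAP = {'rgb': [3, 2, 1], 'rbg': [3, 1, 2], 'grb': [2, 3, 1],
           'gbr': [2, 1, 3], 'brg': [1, 3, 2], 'bgr': [1, 2, 3]}

# Hypothetical usage: light the PiHat's three LEDs red, green and blue.
leds = LED_COLOR(num_led=3)
leds.set_pixel_rgb(0, 0xFF0000)
leds.set_pixel_rgb(1, 0x00FF00)
leds.set_pixel_rgb(2, 0x0000FF)
leds.show()  # pushes self.leds out over SPI; not shown in the excerpt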


Connecting the Smart Speaker with Mobile Clients

The beauty of the SUSI Smart Speaker lies in it being customizable according to the user's needs, and we allow the user to customize it by providing an interface through the mobile clients. To do so, we create a local server on the Raspberry Pi itself. The Raspberry Pi is started in Access Point mode, the mobile clients hit the endpoints in a specific order, and the configuration is sent to the server and stored under the user's account.

The following APIs are required to be executed by the mobile clients:

1. /speaker_config
2. /wifi_credentials
3. /auth
4. /config

The following is the order of API execution (a client-side sketch of this sequence follows at the end of this post):

1. /speaker_config

This endpoint only takes the room name as a parameter and then sends it to the server to store the location of the device under the user's account.

def speaker_config():
    room_name = request.args.get('room_name')
    config = json_config.connect(config_json_folder)
    config['room_name'] = room_name

2. /wifi_credentials

This endpoint takes the Wi-Fi SSID and Wi-Fi password as parameters and then stores them in the Raspberry Pi Wi-Fi config file.

def wifi_config():
    wifi_ssid = request.args.get('wifissid')
    wifi_password = request.args.get('wifipassd')
    subprocess.call(['sudo', 'bash', wifi_search_folder + '/wifi_search.sh', wifi_ssid, wifi_password])
    display_message = {"wifi": "configured", "wifi_ssid": wifi_ssid, "wifi_password": wifi_password}
    resp = jsonify(display_message)
    resp.status_code = 200
    return resp

The wifi_search.sh script is then called, which stores the Wi-Fi credentials in the wpa_supplicant config file using the following command:

cat >> /etc/wpa_supplicant/wpa_supplicant.conf <<EOF
network={
    ssid="$SSID"
    psk="$PSK"
}
EOF

3. /auth

This endpoint takes the SUSI login credentials as parameters, i.e. the registered email id and the corresponding password.

def login():
    auth = request.args.get('auth')
    email = request.args.get('email')
    password = request.args.get('password')
    subprocess.call(['sudo', 'bash', access_point_folder + '/login.sh', auth, email, password])
    display_message = {"authentication": "successful", "auth": auth, "email": email, "password": password}
    resp = jsonify(display_message)
    resp.status_code = 200
    return resp

4. /config

Finally, this endpoint takes the STT, TTS, hotword detection engine and wake button as parameters, and configures the speaker accordingly.

def config():
    stt = request.args.get('stt')
    tts = request.args.get('tts')
    hotword = request.args.get('hotword')
    wake = request.args.get('wake')
    subprocess.Popen(['sudo', 'bash', access_point_folder + '/config.sh', stt, tts, hotword, wake])
    display_message = {"configuration": "successful", "stt": stt, "tts": tts, "hotword": hotword, "wake": wake}
    resp = jsonify(display_message)
    resp.status_code = 200
    return resp

This function runs a script called config.sh, which in turn runs a script called rwap.sh to switch the Raspberry Pi back to normal mode and finally start SUSI on startup.

#!/bin/bash
if [ "$EUID" -ne 0 ]
then
    echo "Must be root"
    exit
fi
cd /etc/hostapd/
sed -i '1,14d' hostapd.conf
cd /etc/
sed -i '57,60d' dhcpcd.conf
cd /etc/network/
sed -i '9,17d' interfaces
echo "Please reboot"
sudo reboot

After successfully hitting all the endpoints from the client, your Smart Speaker restarts and you would see the following screen on your client.
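To make the order concrete, here is a hypothetical client-side walkthrough of the four endpoints using Python's requests library; all values are example assumptions, and the endpoints accept plain GET query parameters as the Flask handlers above read request.args.

import requests

BASE = 'http://10.0.0.1:5000'  # the Raspberry Pi in Access Point mode

# 1. Store the speaker's location under the user's account
requests.get(BASE + '/speaker_config', params={'room_name': 'Living Room'})

# 2. Hand over the home Wi-Fi credentials
requests.get(BASE + '/wifi_credentials',
             params={'wifissid': 'HomeWifi', 'wifipassd': 'secret'})

# 3. Authenticate the user's SUSI.AI account
requests.get(BASE + '/auth',
             params={'auth': 'y', 'email': 'user@example.com', 'password': 'susi-pass'})

# 4. Choose STT/TTS/hotword engines and the wake button; the speaker then reboots
requests.get(BASE + '/config',
             params={'stt': 'deepspeech', 'tts': 'google',
                     'hotword': 'snowboy', 'wake': 'y'})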
References
https://github.com/fossasia/susi_linux
https://raspberrypi.stackexchange.com/questions/10251/prepare-sd-card-for-wifi-on-headless-pi
http://flask.pocoo.org/docs/1.0/

Additional Resources
To read more about bash scripting regarding Wi-Fi on the RasPi, read the following discussion: https://www.raspberrypi.org/forums/viewtopic.php?t=116023
To learn more about shell scripting in general: https://www.shellscript.sh/
To contribute more to our repos, proceed here:
https://github.com/fossasia/susi_linux
https://github.com/fossasia/susi_ios…


Modifying Finite State Architecture On SUSI Linux to Process Multiple Queries

During the initial stages of SUSI Linux, as the code base grew it was getting very difficult to manage, so we opted to implement a Finite State Architecture in our repo. But as new features were implemented in the repo, we realized that we couldn't process more than one query at a time, which restricted a lot of features; e.g. while playing music, the smart speaker was reduced to a simple Bluetooth speaker, since no queries about playing/pausing were accepted. To solve this issue, we made a slight modification to the architecture.

Brief About SUSI States

SUSI works as a Finite State Machine with 3 states, namely the IDLE state, the Recognizing state and the Busy state. The state machine executes in the following order:

IDLE State: In this state, SUSI is searching for the hotword "SUSI", waiting to trigger the complete machine.
Recognizing State: In this state, the state machine has started the STT client. After recognition, SUSI sends the query to the server and awaits the response.
Busy State: After the response has been received, the TTS client is triggered and the answer is given out by SUSI.

Adding a Second Hotword Recognition Class

Now, to allow SUSI to process a second query, the state machine must be triggered while SUSI is giving out the first response, and to trigger the state machine we must have hotword recognition while SUSI is speaking the answer to the previous query. Hence, a hotword recognition engine is now initiated every time the state machine enters the busy state. We will be using Snowboy as the hotword detection engine.

import os

import snowboydecoder

TOP_DIR = os.path.dirname(os.path.abspath(__file__))
RESOURCE_FILE = os.path.join(TOP_DIR, "susi.pmdl")

class StopDetector():
    """This implements Stop Detection with the Snowboy Hotword Detection Engine."""

    def __init__(self, detection) -> None:
        super().__init__()
        self.detector = snowboydecoder.HotwordDetector(
            RESOURCE_FILE, sensitivity=0.6)
        self.detection = detection

    def run(self):
        """Implementation of the run abstract method in HotwordDetector. This
        method is called when the thread is started for the first time. We start
        the Snowboy detection and declare the detected callback as the
        detection_callback method declared in a parent class.
        """
        self.detector.start(detected_callback=self.detection)

This class takes the callback function as a parameter, which is passed when the transition to the busy state takes place from the recognizing state.

Modifying the State Machine Architecture

After declaring a second hotword recognition engine, we must modify how the transitions take place between the states of the SUSI state machine. Hence the callback that will be triggered is passed from the busy state:

def detection(self):
    """This callback is fired when a Hotword Detector detects a hotword.

    :return: None
    """
    if hasattr(self, 'video_process'):
        self.video_process.send_signal(signal.SIGSTOP)
        lights.wakeup()
        subprocess.Popen(['play', str(self.components.config['detection_bell_sound'])])
        lights.off()
        self.transition(self.allowedStateTransitions.get('recognizing'))
        self.video_process.send_signal(signal.SIGCONT)
    if hasattr(self, 'audio_process'):
        self.audio_process.send_signal(signal.SIGSTOP)
        lights.wakeup()
        subprocess.Popen(['play', str(self.components.config['detection_bell_sound'])])
        lights.wakeup()
        self.transition(self.allowedStateTransitions.get('recognizing'))
        self.audio_process.send_signal(signal.SIGCONT)

As soon as the hotword is detected, the state machine transitions to the recognizing state, pausing the current music, and resumes the music after the second query has been completed.

This is how SUSI processes multiple queries simultaneously while still maintaining…
