Connecting SUSI iOS App to SUSI Smart Speaker

SUSI Smart Speaker is an open-source smart speaker with many exciting features. The user needs an Android or iOS device to set up the speaker. You can refer to this post for the initial connection to the SUSI Smart Speaker. In this post, we will see how a user can connect the SUSI Smart Speaker to iOS devices (iPhone/iPad).

Implementation - The first step is to detect whether the iOS device is connected to the SUSI.AI hotspot or not. For this, we match the SSID of the currently connected Wi-Fi with the SUSI.AI hotspot SSID. If it matches, we show the connected device in Device Activity to proceed further with the setup.

Choosing Room - The room name is basically the location of your SUSI Smart Speaker in the home. You may have multiple SUSI Smart Speakers in different rooms, so the purpose of adding a room is to differentiate between them. When the user taps the displayed Wi-Fi cell, the initial setup starts. We use the didSelectRowAt method of UITableViewDelegate to find out which cell was selected. On tapping the displayed Wi-Fi cell, a popup opens with a Room Location text field.

    override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
        if indexPath.row == 0, let speakerSSID = fetchSSIDInfo(), speakerSSID == ControllerConstants.DeviceActivity.susiSSID {
            // Open a popup to select Rooms
            presentRoomsPopup()
        }
    }

When the user clicks the Next button, we send the speaker's room location to the speaker's local server via the following API endpoint, with the room name as a parameter:

    http://10.0.0.1:5000/speaker_config/

Refer to this post for more detail about how choosing a room works and how it is implemented in SUSI iOS.

Sharing Wi-Fi Credentials - On successfully choosing the room, we present a popup that asks the user to enter the credentials of the previously connected Wi-Fi, so that we can connect the Smart Speaker to a Wi-Fi network that provides an internet connection to play music and run commands on the speaker. The popup contains a text field for entering the Wi-Fi password.
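The room-configuration call above can be sketched as a small Python helper (the real app does this in Swift; the `room_name` parameter name is an assumption, not taken from the source):

```python
# Sketch (hypothetical helper, not the actual Swift app code): building the
# local speaker-configuration request described above. The speaker exposes a
# local server at 10.0.0.1:5000 while it is in hotspot mode.

SPEAKER_BASE_URL = "http://10.0.0.1:5000"

def build_speaker_config_request(room_name):
    """Return the endpoint URL and parameters for the room-setup call."""
    url = SPEAKER_BASE_URL + "/speaker_config/"
    params = {"room_name": room_name}  # parameter name is an assumption
    return url, params
```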
When the user clicks the Next button, we share the Wi-Fi credentials with the speaker via the following API endpoint:

    http://10.0.0.1:5000/wifi_credentials/

With the following params:

    Wifissid - Connected Wi-Fi SSID
    Wifipassd - Connected Wi-Fi password

In this API call, we share the Wi-Fi SSID and password with the Smart Speaker. If the credentials are successfully accepted by the speaker, we present a popup asking for the user's SUSI account password; otherwise, we present the Enter Wi-Fi Credentials popup again.

    Client.sharedInstance.sendWifiCredentials(params) { (success, message) in
        DispatchQueue.main.async {
            self.alertController.dismiss(animated: true, completion: nil)
            if success {
                self.presentUserPasswordPopup()
            } else {
                self.view.makeToast("", point: self.view.center, title: message, image: nil, completion: { didTap in
                    UIApplication.shared.endIgnoringInteractionEvents()
                    self.presentWifiCredentialsPopup()
                })
            }
        }
    }

Sharing SUSI Account Credentials - As seen above, when the SUSI Smart Speaker accepts the Wi-Fi credentials, we proceed further with the SUSI account credentials. We open a popup to enter the user's SUSI account password. When the user clicks the Next button, we use the following API endpoint to share the user's SUSI account credentials with the SUSI Smart Speaker:

    http://10.0.0.1:5000/auth/…
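The wifi_credentials call above carries exactly the two documented parameters; a minimal Python sketch (the real app does this in Swift via Client.sendWifiCredentials):

```python
# Sketch (hypothetical helper): building the wifi_credentials request
# described above with the two documented parameters.

def build_wifi_credentials_request(ssid, password):
    """Return the endpoint URL and parameters for sharing Wi-Fi credentials."""
    url = "http://10.0.0.1:5000/wifi_credentials/"
    params = {"wifissid": ssid, "wifipassd": password}
    return url, params
```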


Configuring LED Lights with SUSI Smart Speaker

To make the SUSI Smart Speaker more interactive and to improve its visual aesthetics, we configured the SUSI Smart Speaker's response with 3 RGB LED lights. We used a new PiHat as external hardware to drive the LEDs. The new hardware specs of the SUSI Smart Speaker are:

    Raspberry Pi
    ReSpeaker PiHat 2-Mic Array
    External Speakers

Using an external PiHat not only added the RGB light functionality but also eliminated the need for a USB microphone and provided a factory reset button.

Configuring the PiHat as the default audio driver - To use the PiHat as the default input driver, we use the package called PulseAudio, and we use the following command in the installation script:

    pacmd set-sink-port alsa_output.platform-soc_sound.analog-stereo analog-output-headphones

Configuring PiHat's GPIO Button for Factory Reset - There is an onboard user button, which is connected to GPIO17. We use the Python library RPi.GPIO to detect the button. The Python script is used in the following way:

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(17, GPIO.IN)
    while True:
        if GPIO.input(17) == 1:
            time.sleep(0.1)
        elif GPIO.input(17) == 0:
            start = time.time()
            while GPIO.input(17) == 0:
                time.sleep(0.1)
            end = time.time()
            total = end - start
            if total >= 7:
                subprocess.call(['bash', 'factory_reset.sh'])  # nosec
            else:
                mixer = alsaaudio.Mixer()
                value = mixer.getvolume()[0]
                if value != 0:
                    mixer.setvolume(0)
                else:
                    mixer.setvolume(50)
            print(total)
            time.sleep(0.1)

This script watches the button configured on GPIO pin 17 of the PiHat. If the button is pressed for more than 7 seconds, the factory reset process takes place; otherwise, the device is muted (or unmuted).

Configuring PiHat's LEDs with the Speaker's Response - We use a Python library called spidev to sync the LED lights with SUSI's response.
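The button-handling loop above boils down to a single decision on the press duration; a minimal, testable sketch of just that decision (pure Python, no GPIO or ALSA, with hypothetical action names):

```python
def decide_action(press_duration, current_volume):
    """Mirror the GPIO loop's decision: a press of 7 seconds or longer
    triggers a factory reset; a shorter press toggles the volume
    between muted (0) and an audible level (50)."""
    if press_duration >= 7:
        return "factory_reset"
    # Short press: mute if currently audible, otherwise restore volume.
    return "set_volume_0" if current_volume != 0 else "set_volume_50"
```

Separating the decision from the hardware polling like this also makes the timing threshold easy to unit-test without a Raspberry Pi.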
spidev is usually used to send data to bus devices on the Raspberry Pi. The first step is installing spidev:

    sudo pip install spidev

Now we create a class that holds all the methods which send signals to the bus port. We treat the LED lights as a circular array and then rotate the RGB lights:

    class LED_COLOR:
        # Constants
        MAX_BRIGHTNESS = 0b11111
        LED_START = 0b11100000

        def __init__(self, num_led, global_brightness=MAX_BRIGHTNESS,
                     order='rgb', bus=0, device=1, max_speed_hz=8000000):
            self.num_led = num_led
            order = order.lower()
            self.rgb = RGB_MAP.get(order, RGB_MAP['rgb'])
            if global_brightness > self.MAX_BRIGHTNESS:
                self.global_brightness = self.MAX_BRIGHTNESS
            else:
                self.global_brightness = global_brightness
            self.leds = [self.LED_START, 0, 0, 0] * self.num_led
            self.spi = spidev.SpiDev()
            self.spi.open(bus, device)
            if max_speed_hz:
                self.spi.max_speed_hz = max_speed_hz

        def clear_strip(self):
            for led in range(self.num_led):
                self.set_pixel(led, 0, 0, 0)
            self.show()

        def set_pixel(self, led_num, red, green, blue, bright_percent=100):
            if led_num < 0:
                return
            if led_num >= self.num_led:
                return
            brightness = int(ceil(bright_percent * self.global_brightness / 100.0))
            ledstart = (brightness & 0b00011111) | self.LED_START
            start_index = 4 * led_num
            self.leds[start_index] = ledstart
            self.leds[start_index + self.rgb[0]] = red
            self.leds[start_index + self.rgb[1]] = green
            self.leds[start_index + self.rgb[2]] = blue

        def set_pixel_rgb(self, led_num, rgb_color, bright_percent=100):
            self.set_pixel(led_num, (rgb_color & 0xFF0000) >> 16,
                           (rgb_color & 0x00FF00) >> 8, rgb_color & 0x0000FF, bright_percent)

        def rotate(self, positions=1):
            cutoff = 4 * (positions % self.num_led)
            self.leds = self.leds[cutoff:]…
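The rotate() method above treats the flat LED byte list as a circular buffer; since each pixel occupies 4 bytes (start frame plus three colour bytes), rotating by whole pixels is plain list slicing. A standalone sketch of that idea:

```python
def rotate_leds(leds, num_led, positions=1):
    """Rotate a flat [start, r, g, b] * num_led byte list by whole pixels.
    Each pixel is 4 consecutive bytes, so the cut point is 4 * positions."""
    cutoff = 4 * (positions % num_led)
    return leds[cutoff:] + leds[:cutoff]
```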


Using SUSI.AI Accounting Model to store Device information on Server

Just like with Google Home and Amazon Alexa devices, SUSI.AI users should have a way to add and store the information of their devices (Smart Speakers) on the SUSI Server, so that it can be displayed to them on the various clients. Hence, we needed a servlet that could add and store this user data on the server.

Implementation of Servlets in SUSI.AI - All servlets in SUSI extend the AbstractAPIHandler class and implement APIHandler. Every servlet has 4 methods, which we override depending on what we want the servlet to do. They are as follows:

    @Override
    public String getAPIPath() {
        return null;
    }

    @Override
    public BaseUserRole getMinimalBaseUserRole() {
        return null;
    }

    @Override
    public JSONObject getDefaultPermissions(BaseUserRole baseUserRole) {
        return null;
    }

    @Override
    public ServiceResponse serviceImpl(Query post, HttpServletResponse response, Authorization rights, JsonObjectWithDefault permissions) throws APIException {
        return null;
    }

How do these 4 methods work together? The first method, getAPIPath(), returns the endpoint of the servlet. The second method, getMinimalBaseUserRole(), returns the minimum privilege level required to access the endpoint. The third method, getDefaultPermissions(), gets the default permissions of a UserRole in SUSI Server; different UserRoles have different permissions defined in SUSI Server. Whenever the endpoint defined in getAPIPath() is called properly, it responds with whatever is defined in the fourth method, serviceImpl().

How is user data stored on SUSI Server? Before we move on to the actual implementation of the API required to store the device information, we should get a brief idea of how user data is stored on SUSI Server. Every SUSI user has an Accounting object. It is a JSONObject which stores the settings of that particular user on the server.
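The four-method servlet contract can be illustrated with a minimal Python analogue (the real server is Java; class and method names here merely mirror the Java ones for illustration):

```python
from abc import ABC, abstractmethod

class AbstractAPIHandler(ABC):
    """Python analogue of SUSI Server's servlet contract (illustrative only)."""

    @abstractmethod
    def get_api_path(self):
        """Endpoint of the servlet."""

    @abstractmethod
    def get_minimal_base_user_role(self):
        """Minimum privilege level required to access the endpoint."""

    @abstractmethod
    def get_default_permissions(self, base_user_role):
        """Default permissions of a UserRole."""

    @abstractmethod
    def service_impl(self, query):
        """The response produced when the endpoint is called."""

class EchoHandler(AbstractAPIHandler):
    """Hypothetical servlet wiring the four methods together."""
    def get_api_path(self):
        return "/aaa/echo.json"
    def get_minimal_base_user_role(self):
        return "ANONYMOUS"
    def get_default_permissions(self, base_user_role):
        return None
    def service_impl(self, query):
        return {"echo": query}
```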
For example, every time you change a setting on the web client https://chat.susi.ai, an API call is made to aaa/changeUserSettings.json with the information of the changed setting as parameters, and the user settings are stored on the server in the Accounting object of the user.

Implementation of a Servlet to store Device information on Server - The task of this servlet is to store the information of a new device (Smart Speaker) whenever it is initially set up using the Android/iOS app. This is the implementation of the 4 methods of the servlet used to store information about connected devices:

    @Override
    public String getAPIPath() {
        return "/aaa/addNewDevice.json";
    }

    @Override
    public UserRole getMinimalUserRole() {
        return UserRole.USER;
    }

    @Override
    public JSONObject getDefaultPermissions(UserRole baseUserRole) {
        return null;
    }

    @Override
    public ServiceResponse serviceImpl(Query query, HttpServletResponse response, Authorization authorization, JsonObjectWithDefault permissions) throws APIException {
        JSONObject value = new JSONObject();
        String key = query.get("macid", null);
        String name = query.get("name", null);
        String device = query.get("device", null);
        if (key == null || name == null || device == null) {
            throw new APIException(400, "Bad service call, missing arguments");
        } else {
            value.put(name, device);
        }
        if (authorization.getIdentity() == null) {
            throw new APIException(400, "Specified user data not found, ensure you are logged in");
        } else {
            Accounting accounting = DAO.getAccounting(authorization.getIdentity());
            if (accounting.getJSON().has("devices")) {
                accounting.getJSON().getJSONObject("devices").put(key, value);
            }…
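The shape of the data the servlet writes into the Accounting object can be sketched with plain Python dicts (the parameter names macid, name and device come from the servlet; treating the Accounting object as a plain dict is a simplification):

```python
def add_device(accounting, macid, name, device):
    """Mimic the servlet: store {name: device} under the device's MAC id
    inside the user's 'devices' object, creating it if missing."""
    devices = accounting.setdefault("devices", {})
    devices[macid] = {name: device}
    return accounting
```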


Filling Audio Buffer to Generate Waves in the PSLab Android App

The PSLab Android App works as an oscilloscope and a wave generator using the audio jack of the Android device. The implementation of the oscilloscope using the in-built mic has been discussed in the blog post "Using the Audio Jack to make an Oscilloscope in the PSLab Android App", and the wave generator has been discussed in the blog post "Implement Wave Generation Functionality in the PSLab Android App". This post is a continuation of the post on wave generation. Here, the subject of discussion is how to fill the audio buffer so that the resulting wave is either a sine wave, a square wave or a sawtooth wave. The resulting audio buffer is played using the AudioTrack API of Android to generate the corresponding wave.

The waves we are trying to generate are periodic waves: waves whose displacement has a periodic variation with respect to time or distance, or both. Thus, the problem reduces to generating a pulse which constitutes a single time period of the wave. Suppose we want to generate a sine wave; if we generate a continuous stream of such pulses, as illustrated in the image below, we get a continuous sine wave. This is the main concept we shall implement in code.

Initialise the AudioTrack Object - An AudioTrack object is initialised using the following parameters:

    STREAM TYPE: Type of stream like STREAM_SYSTEM, STREAM_MUSIC, STREAM_RING, etc. For wave generation purposes we use the music stream. Every stream has its own maximum and minimum volume level.
    SAMPLING RATE: The rate at which the source samples the audio signal.
    BUFFER SIZE IN BYTES: Total size, in bytes, of the internal buffer from which the audio data is read for playback.
    MODES: There are two modes:
    MODE_STATIC: Audio data is transferred from Java to the native layer only once before the audio starts playing.
    MODE_STREAM: Audio data is streamed from Java to the native layer as the audio is being played.

getMinBufferSize() returns the estimated minimum buffer size required to create an AudioTrack object in MODE_STREAM:

    minTrackBufferSize = AudioTrack.getMinBufferSize(SAMPLING_RATE,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
    audioTrack = new AudioTrack(
            AudioManager.STREAM_MUSIC,
            SAMPLING_RATE,
            AudioFormat.CHANNEL_OUT_MONO,
            AudioFormat.ENCODING_PCM_16BIT,
            minTrackBufferSize,
            AudioTrack.MODE_STREAM);

Fill the Audio Buffer to Generate a Sine Wave - Depending on the values in the audio buffer, the wave is generated by the AudioTrack object. Therefore, to generate a specific kind of wave, we need to fill the audio buffer with values governed by the wave equation of the signal we want to generate.

    public short[] createBuffer(int frequency) {
        short[] buffer = new short[minTrackBufferSize];
        double f = frequency;
        double q = 0;
        double level = 16384;
        final double K = 2.0 * Math.PI / SAMPLING_RATE;
        for (int i = 0; i < minTrackBufferSize; i++) {
            f += (frequency - f) / 4096.0;
            q += (q < Math.PI) ?…
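Stripped of the Java boilerplate, filling the buffer for a sine wave is just sampling the sine equation at the stream's sampling rate; a minimal Python sketch of that idea (the amplitude 16384 matches the `level` used above):

```python
import math

SAMPLING_RATE = 44100
LEVEL = 16384  # amplitude: half of the signed 16-bit sample range

def create_sine_buffer(frequency, num_samples):
    """Fill a buffer with 16-bit PCM sine samples: s[i] = A * sin(2*pi*f*i/Fs)."""
    k = 2.0 * math.pi * frequency / SAMPLING_RATE
    return [int(LEVEL * math.sin(k * i)) for i in range(num_samples)]
```

At 44100 Hz and 440 Hz, one period spans roughly 100 samples, so a buffer of a few thousand samples already holds many full cycles for AudioTrack to loop over.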


Implement Wave Generation Functionality in The PSLab Android App

The PSLab Android App works as an oscilloscope using the audio jack of the Android device. The implementation of the scope using the in-built mic is discussed in the post "Using the Audio Jack to make an Oscilloscope in the PSLab Android App". Another application which can be implemented by hacking the audio jack is wave generation. We can generate different types of signals on the wires connected to the audio jack using the Android APIs that control the audio hardware. In this post, I will discuss how we can generate waves using the Android APIs for controlling the audio hardware.

Configuration of the Audio Jack for Wave Generation - Simply cut open the wire of a cheap pair of earphones to gain access to its terminals and attach alligator pins by soldering, or by any other hack (jugaad) you can think of. After you are done tinkering with the earphone jack, it should look something like the image below. If your earphones have a mic, there will be an extra wire for the mic input. In any general pair of earphones, the wire configuration is almost the same as shown in the image below.

Android APIs for Controlling the Audio Hardware - AudioRecord and AudioTrack are the two classes in Android that manage recording and playback respectively. For the wave generation application we only need the AudioTrack class.

Creating an AudioTrack object - We need the following parameters to initialise an AudioTrack object:

    STREAM TYPE: Type of stream like STREAM_SYSTEM, STREAM_MUSIC, STREAM_RING, etc. For wave generation purposes we use the music stream. Every stream has its own maximum and minimum volume level.
    SAMPLING RATE: The rate at which the source samples the audio signal.
    BUFFER SIZE IN BYTES: Total size, in bytes, of the internal buffer from which the audio data is read for playback.
    MODES: There are two modes:
    MODE_STATIC: Audio data is transferred from Java to the native layer only once before the audio starts playing.
    MODE_STREAM: Audio data is streamed from Java to the native layer as the audio is being played.

getMinBufferSize() returns the estimated minimum buffer size required to create an AudioTrack object in MODE_STREAM:

    private int minTrackBufferSize;
    private static final int SAMPLING_RATE = 44100;

    minTrackBufferSize = AudioTrack.getMinBufferSize(SAMPLING_RATE,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
    audioTrack = new AudioTrack(
            AudioManager.STREAM_MUSIC,
            SAMPLING_RATE,
            AudioFormat.CHANNEL_OUT_MONO,
            AudioFormat.ENCODING_PCM_16BIT,
            minTrackBufferSize,
            AudioTrack.MODE_STREAM);

The createBuffer() function creates the audio buffer that is played using the AudioTrack object, i.e. the AudioTrack object writes this buffer to the playback stream. The function below fills random values into the buffer, so a random signal is generated. If we want to generate a specific wave such as a square wave, sine wave or triangular wave, we have to fill the buffer accordingly.

    public short[] createBuffer(int frequency) {
        // generating a random buffer for now
        short[] buffer = new short[minTrackBufferSize];
        for (int i = 0; i < minTrackBufferSize; i++) {
            buffer[i] = (short) (random.nextInt(65536) - 32768);  // full signed 16-bit range
        }
        return buffer;
    }

We created a write() method and passed the audio buffer created in the step above as an argument to…
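The random-fill approach can be sketched in a few lines of Python; the key point is that samples must span the full signed 16-bit range, i.e. -32768 to 32767, to use the whole PCM dynamic range:

```python
import random

def create_random_buffer(num_samples):
    """Fill a buffer with random signed 16-bit PCM samples (white noise)."""
    return [random.randint(-32768, 32767) for _ in range(num_samples)]
```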
