Connecting SUSI iOS App to SUSI Smart Speaker

SUSI Smart Speaker is an Open Source speaker with many exciting features. The user needs an Android or iOS device to set up the speaker. You can refer to this post for the initial connection to the SUSI Smart Speaker. In this post, we will see how a user can connect the SUSI Smart Speaker to an iOS device (iPhone/iPad).

Implementation –

The first step is to detect whether the iOS device is connected to the SUSI.AI hotspot. For this, we match the SSID of the currently connected Wi-Fi network with the SUSI.AI hotspot SSID. If they match, we show the connected device in Device Activity so the user can proceed with the setup.
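
In the SUSI iOS app, this check is done with a fetchSSIDInfo() helper, which is used in the table view delegate shown below. For comparison, here is a rough sketch of the same SSID check as it could be done on an Android client; the helper class and the exact hotspot SSID string are assumptions:

import android.content.Context;
import android.net.wifi.WifiInfo;
import android.net.wifi.WifiManager;

// Rough sketch only; the hotspot SSID string is an assumption.
public class SsidCheck {
    public static boolean isOnSusiHotspot(Context context) {
        WifiManager wifiManager = (WifiManager) context.getApplicationContext()
                .getSystemService(Context.WIFI_SERVICE);
        WifiInfo wifiInfo = wifiManager.getConnectionInfo();
        String currentSSID = wifiInfo.getSSID().replace("\"", "");  // the SSID may be returned in quotes
        return "SUSI.AI".equals(currentSSID);                       // assumed hotspot SSID
    }
}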

Choosing Room –

The room name is the location of your SUSI Smart Speaker in your home. You may have multiple SUSI Smart Speakers in different rooms, so the purpose of adding a room is to differentiate between them.

When the user taps the displayed Wi-Fi cell, the initial setup starts. We use the didSelectRowAt method of UITableViewDelegate to detect which cell was selected. On tapping the displayed Wi-Fi cell, a popup opens with a Room Location text field.

override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
    if indexPath.row == 0, let speakerSSID = fetchSSIDInfo(), speakerSSID == ControllerConstants.DeviceActivity.susiSSID {
        // Open a popup to select Rooms
        presentRoomsPopup()
    }
}

When the user clicks the Next button, we send the room location to the speaker's local server through the following API endpoint, with the room name as a parameter:

http://10.0.0.1:5000/speaker_config/

Refer to this post for more detail about how choosing a room works and how it is implemented in SUSI iOS.
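
The request itself is a plain HTTP call to the speaker's local server. Below is a minimal sketch of such a call; the parameter name room_name and the use of a simple GET request are assumptions, since the post does not spell out the exact request format:

// Illustrative call to the speaker's local server.
// The parameter name "room_name" and the GET method are assumptions.
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class SpeakerConfigExample {
    public static boolean sendRoom(String room) throws Exception {
        String query = "room_name=" + URLEncoder.encode(room, "UTF-8");
        URL url = new URL("http://10.0.0.1:5000/speaker_config/?" + query);
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");
        int responseCode = connection.getResponseCode();
        connection.disconnect();
        return responseCode == 200;   // 200 indicates the room was stored
    }
}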

Sharing Wi-Fi Credentials –

After the room is chosen successfully, we present a popup that asks the user to enter the credentials of the previously connected Wi-Fi network, so that the Smart Speaker can connect to a Wi-Fi network with internet access to play music and respond to commands.

We present a popup with a text field for entering the Wi-Fi password.

When the user clicks the Next button, we share the Wi-Fi credentials with the speaker through the following API endpoint:

http://10.0.0.1:5000/wifi_credentials/

With the following params-

  1. Wifissid – Connected Wi-Fi SSID
  2. Wifipassd – Connected Wi-Fi password

In this API call, we share the Wi-Fi SSID and password with the Smart Speaker. If the speaker accepts the credentials, we present a popup asking for the user's SUSI account password; otherwise we present the Enter Wi-Fi Credentials popup again.

Client.sharedInstance.sendWifiCredentials(params) { (success, message) in
    DispatchQueue.main.async {
        self.alertController.dismiss(animated: true, completion: nil)
        if success {
            self.presentUserPasswordPopup()
        } else {
            self.view.makeToast("", point: self.view.center, title: message, image: nil, completion: { didTap in
                UIApplication.shared.endIgnoringInteractionEvents()
                self.presentWifiCredentialsPopup()
            })
        }
    }
}


Sharing SUSI Account Credentials –

In the method above, we saw that once the SUSI Smart Speaker accepts the Wi-Fi credentials, we proceed further with the SUSI account credentials. We open a popup asking the user to enter their SUSI account password:

When the user clicks the Next button, we use the following API endpoint to share the user's SUSI account credentials with the SUSI Smart Speaker:

http://10.0.0.1:5000/auth/

With the following params-

  1. email
  2. password

The user's email is already saved on the device, so the user doesn't have to type it again. If the speaker accepts the user credentials, we proceed with the configuration process; otherwise we open the Enter Password popup again.

Client.sharedInstance.sendAuthCredentials(params) { (success, message) in
    DispatchQueue.main.async {
        self.alertController.dismiss(animated: true, completion: nil)
        if success {
            self.setConfiguration()
        } else {
            self.view.makeToast("", point: self.view.center, title: message, image: nil, completion: { didTap in
                UIApplication.shared.endIgnoringInteractionEvents()
                self.presentUserPasswordPopup()
            })
        }
    }
}


Setting Configuration –

After successfully sharing the SUSI account credentials, the following API endpoint is used to set the configuration.

http://10.0.0.1:5000/config/

With the following params-

  1. stt
  2. tts
  3. hotword
  4. wake

If this API call succeeds, the user's iOS device is successfully connected to the SUSI Smart Speaker.

Client.sharedInstance.setConfiguration(params) { (success, message) in
    DispatchQueue.main.async {
        if success {
            // Successfully Configured
            self.isSetupDone = true
            self.view.makeToast(ControllerConstants.DeviceActivity.doneSetupDetailText)
        } else {
            self.view.makeToast("", point: self.view.center, title: message, image: nil, completion: { didTap in
                UIApplication.shared.endIgnoringInteractionEvents()
            })
        }
    }
}


Resources –

  1. Apple’s Documentation of tableView(_:didSelectRowAt:) API
  2. Initial Setups for Connecting SUSI Smart Speaker with iPhone/iPad
  3. SUSI Linux Link: https://github.com/fossasia/susi_linux
  4. Adding Option to Choose Room for SUSI Smart Speaker in iOS App

Configuring LED Lights with SUSI Smart Speaker

To make the SUSI Smart Speaker more interactive and to improve its visual aesthetics, we synchronized the SUSI Smart Speaker's responses with 3 RGB LED lights. We used a new PiHat as external hardware to drive the LEDs.

Now the new hardware specs of the SUSI Smart Speaker are:

  1. Raspberry Pi
  2. ReSpeaker PiHat 2 Mic Array
  3. External Speakers

Using an external PiHat not only added the RGB light functionality but also eliminated the need for a USB microphone and allowed us to configure a factory reset button.

Configuring the PiHat as the default Audio driver

To use the PiHat as the default audio driver, we use a package called PulseAudio, and we run the following command in the installation script; it selects the analog headphone port of the on-board sound card for playback:

pacmd set-sink-port alsa_output.platform-soc_sound.analog-stereo analog-output-headphones

Configuring PiHat’s GPIO Button with Factory Reset

There is an onboard user button connected to GPIO17. We use the Python library RPi.GPIO to detect presses of this button. The Python script is used in the following way:

import subprocess
import time

import alsaaudio
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(17, GPIO.IN)

while True:
    if GPIO.input(17) == 1:
        time.sleep(0.1)
    elif GPIO.input(17) == 0:
        start = time.time()
        while GPIO.input(17) == 0:
            time.sleep(0.1)
        end = time.time()
        total = end - start            # duration of the button press in seconds
        if total >= 7:
            # long press: trigger a factory reset
            subprocess.call(['bash', 'factory_reset.sh'])  # nosec #pylint-disable type: ignore
        else:
            # short press: toggle between mute and 50% volume
            mixer = alsaaudio.Mixer()
            value = mixer.getvolume()[0]
            if value != 0:
                mixer.setvolume(0)
            else:
                mixer.setvolume(50)
        print(total)
        time.sleep(0.1)


This script watches the button configured on GPIO pin 17 of the PiHat. If the button is held for more than 7 seconds, the factory reset process is started; otherwise the speaker volume is toggled between mute and 50%.

Configuring PiHat’s LED with Speaker’s Response

We use a Python library called spidev to sync the LED lights with SUSI's response. spidev is used to communicate with devices on the SPI bus of the Raspberry Pi.

The first step was installing spidev

sudo pip install spidev

Now we create a class that holds all the methods used to send data to the SPI bus. We treat the LED lights as a circular array and rotate the RGB values across them.

from math import ceil

import spidev

# RGB_MAP (defined elsewhere in the driver) maps a colour-order string such as
# 'rgb' or 'grb' to the byte offsets used inside each 4-byte LED frame.

class LED_COLOR:
    # Constants
    MAX_BRIGHTNESS = 0b11111
    LED_START = 0b11100000

    def __init__(self, num_led, global_brightness=MAX_BRIGHTNESS,
                 order='rgb', bus=0, device=1, max_speed_hz=8000000):
        self.num_led = num_led
        order = order.lower()
        self.rgb = RGB_MAP.get(order, RGB_MAP['rgb'])
        if global_brightness > self.MAX_BRIGHTNESS:
            self.global_brightness = self.MAX_BRIGHTNESS
        else:
            self.global_brightness = global_brightness
        self.leds = [self.LED_START, 0, 0, 0] * self.num_led
        self.spi = spidev.SpiDev()
        self.spi.open(bus, device)
        if max_speed_hz:
            self.spi.max_speed_hz = max_speed_hz

    def clear_strip(self):
        for led in range(self.num_led):
            self.set_pixel(led, 0, 0, 0)
        self.show()

    def set_pixel(self, led_num, red, green, blue, bright_percent=100):
        if led_num < 0:
            return
        if led_num >= self.num_led:
            return
        brightness = int(ceil(bright_percent * self.global_brightness / 100.0))
        ledstart = (brightness & 0b00011111) | self.LED_START
        start_index = 4 * led_num
        self.leds[start_index] = ledstart
        self.leds[start_index + self.rgb[0]] = red
        self.leds[start_index + self.rgb[1]] = green
        self.leds[start_index + self.rgb[2]] = blue

    def set_pixel_rgb(self, led_num, rgb_color, bright_percent=100):
        self.set_pixel(led_num, (rgb_color & 0xFF0000) >> 16,
                       (rgb_color & 0x00FF00) >> 8, rgb_color & 0x0000FF, bright_percent)

    def rotate(self, positions=1):
        cutoff = 4 * (positions % self.num_led)
        self.leds = self.leds[cutoff:] + self.leds[:cutoff]

    def show(self):
        data = list(self.leds)
        while data:
            self.spi.xfer2(data[:32])
            data = data[32:]
        self.clock_end_frame()  # clock_end_frame() is defined elsewhere in the class

    def cleanup(self):
        self.spi.close()  # Close SPI port

    def wheel(self, wheel_pos):
        """Get a color from a color wheel; Green -> Red -> Blue -> Green"""
        if wheel_pos > 255:
            wheel_pos = 255  # Safeguard
        if wheel_pos < 85:  # Green -> Red
            return self.combine_color(wheel_pos * 3, 255 - wheel_pos * 3, 0)
        if wheel_pos < 170:  # Red -> Blue
            wheel_pos -= 85
            return self.combine_color(255 - wheel_pos * 3, 0, wheel_pos * 3)
        wheel_pos -= 170
        return self.combine_color(0, wheel_pos * 3, 255 - wheel_pos * 3)


Now we use threading to create non-blocking code, which allows SUSI to send a response and change the LEDs simultaneously.

import threading

try:
    import queue as Queue   # Python 3
except ImportError:
    import Queue             # Python 2


class Lights:
    LIGHTS_N = 3

    # _run() and the _wakeup/_listen/_think/_speak/_off worker methods
    # are defined further down in the class.
    def __init__(self):
        self.next = threading.Event()
        self.queue = Queue.Queue()
        self.thread = threading.Thread(target=self._run)
        self.thread.daemon = True
        self.thread.start()

    def wakeup(self, direction=0):
        def f():
            self._wakeup(direction)
        self.next.set()
        self.queue.put(f)

    def listen(self):
        self.next.set()
        self.queue.put(self._listen)

    def think(self):
        self.next.set()
        self.queue.put(self._think)

    def speak(self):
        self.next.set()
        self.queue.put(self._speak)

    def off(self):
        self.next.set()
        self.queue.put(self._off)

This is how the LED lights are configured with SUSI's response: the speaker calls methods such as wakeup(), listen(), think() and speak() on a Lights instance, and the daemon thread updates the LEDs without blocking the response.


Using SUSI.AI Accounting Model to store Device information on Server

Just like with Google Home devices and Amazon Alexa devices, SUSI.AI Users should have a way to add and store the information of their devices (Smart Speakers) on SUSI Server, so that it could be displayed to them on various clients. Hence, we needed a Servlet which could add and store the User data on the Server.

Implementation of Servlets in SUSI.AI

All servlets in SUSI extend the AbstractAPIHandler class and implement APIHandler. Each servlet has 4 methods, which we override depending on what we want the servlet to do. They are as follows:

@Override
    public String getAPIPath() {
        return null;
    }

@Override
    public BaseUserRole getMinimalBaseUserRole() {
        return null;
    }

@Override
    public JSONObject getDefaultPermissions(BaseUserRole baseUserRole) {
        return null;
    }

@Override
    public ServiceResponse serviceImpl(Query post, HttpServletResponse response, Authorization rights, JsonObjectWithDefault permissions) throws APIException {
        return null;
    }


How do these 4 methods work together?

  1. The first method is getAPIPath(). It returns the endpoint of the servlet.
  2. The second method is getMinimalBaseUserRole(). It returns the minimum privilege level required to access the endpoint.
  3. The third method is getDefaultPermissions(). It gets the Default Permissions of a UserRole in SUSI Server. Different UserRoles have different permissions defined in SUSI Server.
  4. Whenever the endpoint defined in the getAPIPath() method is called properly, it responds with whatever is defined in the fourth method, which is serviceImpl().

How is User data stored on SUSI Server?

Before we move on to the actual implementation of the API required to store the device information, we should get a brief idea of how exactly User data is stored on SUSI Server.

Every SUSI user has an Accounting object. It is a JSONObject which stores the settings of the particular user on the server. For example, every time you change a setting on the web client https://chat.susi.ai, an API call is made to aaa/changeUserSettings.json with parameters describing the changed setting, and the user settings are stored on the server in the user's Accounting object.

Implementation of a Servlet to store Device information on Server

The task of this servlet is to store the information of a new device (Smart speaker) whenever it is initially set up using the Android/iOS app.

This is the implementation of the 4 methods of a servlet which is used to store information of connected devices:

    @Override
    public String getAPIPath() {
        return "/aaa/addNewDevice.json";
    }

    @Override
    public UserRole getMinimalUserRole() {
        return UserRole.USER;
    }

    @Override
    public JSONObject getDefaultPermissions(UserRole baseUserRole) {
        return null;
    }

    @Override
   public ServiceResponse serviceImpl(Query query, HttpServletResponse response, Authorization authorization, JsonObjectWithDefault permissions) throws APIException {

           JSONObject value = new JSONObject();
           
           String key = query.get("macid", null);
           String name = query.get("name", null);
           String device = query.get("device", null);
               
           if (key == null || name == null || device == null) {
               throw new APIException(400, "Bad service call, missing arguments");
           } else {
               value.put(name, device);
           }
           
           if (authorization.getIdentity() == null) {
               throw new APIException(400, "Specified user data not found, ensure you are logged in");
           } 
           else {
                Accounting accounting = DAO.getAccounting(authorization.getIdentity());
                if (accounting.getJSON().has("devices")) {
                       accounting.getJSON().getJSONObject("devices").put(key, value);
                   } 
                else {
                       JSONObject jsonObject = new JSONObject();
                       jsonObject.put(key, value);
                       accounting.getJSON().put("devices", jsonObject);
                   }
               
               JSONObject result = new JSONObject(true);
               result.put("accepted", true);
               result.put("message", "You have successfully added the device!");
               return new ServiceResponse(result);
           }
    }


As can be seen from the above code, the endpoint for this servlet is /aaa/addNewDevice.json and it accepts 3 parameters (an example of the success response follows the list) –

  • macid : Mac Address of the device
  • name : Name of the device
  • device : Additional information which you want to send about the device (subject to changes)
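
When all three parameters are present and the user is logged in, serviceImpl() builds the result object shown above, so a successful call returns a response of roughly this shape (the surrounding framework may add further fields, such as session information):

{
  "accepted": true,
  "message": "You have successfully added the device!"
}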

Since the main task of this servlet is user specific and should only be accessible to the particular user, we return UserRole.USER in the getMinimalUserRole() method.

In the serviceImpl() method, we first extract the values of the URL query parameters and store them in variables. If any of the parameters is missing, we respond with error code 400 and the message "Bad service call, missing arguments". If the query parameters are fine, we store the name and device values in a new JSONObject (value in this case).

An if-else statement then checks whether the user is logged in or not, using authorization.getIdentity(), a function which returns the identity of the user. The implementation of this function is in the Accounting.java file, and is as follows:

    public ClientIdentity getIdentity() {
        return identity;
    }


If the user is logged in, the getAccounting() function of the DAO.java file is called, which returns an Accounting object according to the following implementation:

    public static Accounting getAccounting(@Nonnull ClientIdentity identity) {
        return new Accounting(identity, accounting);
    }


Our device information is then stored in this Accounting object in the following format:

{
  "lastLoginIP": "162.158.166.19",
  "accepted": true,
  "message": "Success: Showing user data",
  "session": {"identity": {
    "type": "email",
    "name": "myemail@gmail.com",
    "anonymous": false
  }},
  "settings": {
    "customThemeValue": "4285f4,f5f4f6,fff,f5f4f6,fff,4285f4,",
    "theme": "light"
  },
  "devices": {
    "MacID2": {"MyDevice": "SmartSpeaker"},
    "MacID1": {"Name of device": "SmartSpeaker"}
  }
}


Filling Audio Buffer to Generate Waves in the PSLab Android App

The PSLab Android App works as an oscilloscope and a wave generator using the audio jack of the Android device. The implementation of the oscilloscope in the Android device using the in-built mic has been discussed in the blog post “Using the Audio Jack to make an Oscilloscope in the PSLab Android App” and the same has been discussed in the context of wave generator in the blog post “Implement Wave Generation Functionality in the PSLab Android App”. This post is a continuation of the post related to the implementation of wave generation functionality in the PSLab Android App. In this post, the subject matter of discussion is the way to fill the audio buffer so that the resulting wave generated is either a Sine Wave, a Square Wave or a Sawtooth Wave. The resultant audio buffer would be played using the AudioTrack API of Android to generate the corresponding wave. The waves we are trying to generate are periodic waves.

Periodic Wave: A wave whose displacement has a periodic variation with respect to time or distance, or both.

Thus, the problem reduces to generating a pulse which will constitute a single time period of the wave. Suppose we want to generate a sine wave; if we generate a continuous stream of pulses as illustrated in the image below, we would get a continuous sine wave. This is the main concept that we shall try to implement using code.

Initialise AudioTrack Object

AudioTrack object is initialised using the following parameters:

  • STREAM TYPE: Type of stream like STREAM_SYSTEM, STREAM_MUSIC, STREAM_RING, etc. For wave generation purposes we are using stream music. Every stream has its own maximum and minimum volume level.  
  • SAMPLING RATE: It is the rate at which the source samples the audio signal.
  • BUFFER SIZE IN BYTES: Total size of the internal buffer in bytes from where the audio data is read for playback.
  • MODES: There are two modes-
    • MODE_STATIC: Audio data is transferred from Java to the native layer only once before the audio starts playing.
    • MODE_STREAM: Audio data is streamed from Java to the native layer as audio is being played.

getMinBufferSize() returns the estimated minimum buffer size required for an AudioTrack object to be created in the MODE_STREAM mode.

minTrackBufferSize = AudioTrack.getMinBufferSize(SAMPLING_RATE, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
audioTrack = new AudioTrack(
       AudioManager.STREAM_MUSIC,
       SAMPLING_RATE,
       AudioFormat.CHANNEL_OUT_MONO,
       AudioFormat.ENCODING_PCM_16BIT,
       minTrackBufferSize,
       AudioTrack.MODE_STREAM);

Fill Audio Buffer to Generate Sine Wave

Depending on the values in the audio buffer, the wave is generated by the AudioTrack object. Therefore, to generate a specific kind of wave, we need to fill the audio buffer with some specific values. The values are governed by the wave equation of the signal that we want to generate.

public short[] createBuffer(int frequency) {
    short[] buffer = new short[minTrackBufferSize];
    double f = frequency;
    double q = 0;
    double level = 16384;                                        // peak amplitude of the wave
    final double K = 2.0 * Math.PI / SAMPLING_RATE;

    for (int i = 0; i < minTrackBufferSize; i++) {
        f += (frequency - f) / 4096.0;                           // smooth sudden frequency changes
        q += (q < Math.PI) ? f * K : (f * K) - (2.0 * Math.PI);  // phase, wrapped once per period
        buffer[i] = (short) Math.round(Math.sin(q) * level);     // scale the sine value to the peak amplitude
    }
    return buffer;
}

Fill Audio Buffer to Generate Square Wave

To generate a square wave, let’s assume the time period to be t units. So, we need the amplitude to be equal to A for t/2 units and -A for the next t/2 units. Repeating this pulse continuously, we will get a square wave.

buffer[i] = (short) ((q > 0.0) ? 1 : -1);
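
As a fuller illustration, here is a minimal sketch of the complete buffer-filling loop for a square wave; it reuses the same phase accumulator q and amplitude level as in the sine example above, and the method name is only illustrative:

public short[] createSquareBuffer(int frequency) {
    // Illustrative name; uses the same minTrackBufferSize and SAMPLING_RATE fields as createBuffer().
    short[] buffer = new short[minTrackBufferSize];
    double f = frequency;
    double q = 0;
    double level = 16384;                                        // peak amplitude
    final double K = 2.0 * Math.PI / SAMPLING_RATE;

    for (int i = 0; i < minTrackBufferSize; i++) {
        f += (frequency - f) / 4096.0;                           // smooth frequency changes
        q += (q < Math.PI) ? f * K : (f * K) - (2.0 * Math.PI);  // phase, wrapped once per period
        // +level for the positive half of the period, -level for the negative half
        buffer[i] = (short) ((q > 0.0) ? level : -level);
    }
    return buffer;
}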

Fill Audio Buffer to Generate Sawtooth Wave

A ramp signal increases linearly with time. A ramp pulse has been illustrated in the image below:

We need repeated ramp pulses to generate a continuous sawtooth wave.

buffer[i] = (short) Math.round((q / Math.PI));
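
Similarly, here is a minimal sketch of the complete loop for a sawtooth wave, again assuming the same phase accumulator and amplitude as above (the method name is only illustrative):

public short[] createSawtoothBuffer(int frequency) {
    // Illustrative name; uses the same minTrackBufferSize and SAMPLING_RATE fields as createBuffer().
    short[] buffer = new short[minTrackBufferSize];
    double f = frequency;
    double q = 0;
    double level = 16384;                                        // peak amplitude
    final double K = 2.0 * Math.PI / SAMPLING_RATE;

    for (int i = 0; i < minTrackBufferSize; i++) {
        f += (frequency - f) / 4096.0;
        q += (q < Math.PI) ? f * K : (f * K) - (2.0 * Math.PI);
        // q ramps linearly from -PI to PI every period, so q / PI is a ramp from -1 to 1
        buffer[i] = (short) Math.round((q / Math.PI) * level);
    }
    return buffer;
}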

Finally, once the audio buffer is filled, write it to the audio sink for playback using the write() method exposed by the AudioTrack object. In MODE_STREAM the track also has to be started with play() for the written data to be heard.

audioTrack.write(buffer, 0, buffer.length);


Implement Wave Generation Functionality in The PSLab Android App

The PSLab Android App works as an oscilloscope using the audio jack of the Android device. The implementation of the scope using the in-built mic is discussed in the post Using the Audio Jack to make an Oscilloscope in the PSLab Android App. Another application which can be implemented by hacking the audio jack is wave generation. We can generate different types of signals on the wires connected to the audio jack using the Android APIs that control the audio hardware. In this post, I will discuss how we can generate waves using these APIs.

Configuration of Audio Jack for Wave Generation

Simply cut open the wire of a cheap pair of earphones to gain access to its terminals and attach alligator pins by soldering, or by any other hack (jugaad) you can think of. After you are done tinkering with the earphone jack, it should look something like the image below.

Source: edn.com

If your earphones have a mic, there will be an extra wire for the mic input. In most pairs of earphones the wire configuration is almost the same as shown in the image below.

Source: flickr

Android APIs for Controlling Audio Hardware

AudioRecord and AudioTrack are the two classes in Android that manage recording and playback respectively. For the wave generation application we only need the AudioTrack class.

Creating an AudioTrack object: We need the following parameters to initialise an AudioTrack object.

STREAM TYPE: Type of stream like STREAM_SYSTEM, STREAM_MUSIC, STREAM_RING, etc. For wave generation purpose we are using stream music. Every stream has its own maximum and minimum volume level.

SAMPLING RATE: the rate at which the source samples the audio signal.

BUFFER SIZE IN BYTES: total size in bytes of the internal buffer from where the audio data is read for playback.

MODES: There are two modes

  • MODE_STATIC: Audio data is transferred from Java to native layer only once before the audio starts playing.
  • MODE_STREAM: Audio data is streamed from Java to native layer as audio is being played.

getMinBufferSize() returns the estimated minimum buffer size required for an AudioTrack object to be created in the MODE_STREAM mode.

private int minTrackBufferSize;
private static final int SAMPLING_RATE = 44100;
minTrackBufferSize = AudioTrack.getMinBufferSize(SAMPLING_RATE, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);

audioTrack = new AudioTrack(
       AudioManager.STREAM_MUSIC,
       SAMPLING_RATE,
       AudioFormat.CHANNEL_OUT_MONO,
       AudioFormat.ENCODING_PCM_16BIT,
       minTrackBufferSize,
       AudioTrack.MODE_STREAM);

The function createBuffer() creates the audio buffer that is played using the AudioTrack object, i.e. the AudioTrack object writes this buffer to the playback stream. The function below fills the buffer with random values, so a random (noise) signal is generated. If we want to generate a specific wave such as a square wave, sine wave or triangular wave, we have to fill the buffer accordingly.

public short[] createBuffer(int frequency) {
    // generating a random buffer for now; random is a java.util.Random instance
    short[] buffer = new short[minTrackBufferSize];
    for (int i = 0; i < minTrackBufferSize; i++) {
        // pick a value across the full 16-bit range [-32768, 32767]
        buffer[i] = (short) (random.nextInt(65536) - 32768);
    }
    return buffer;
}

We created a write() method and passed the audio buffer created in the above step as an argument. This method writes the audio buffer to the audio stream for playback.

public void write(short[] buffer) {
   /* write buffer to audioTrack */
   audioTrack.write(buffer, 0, buffer.length);
}

The amplitude of the signal can be controlled by changing the volume level of the stream on which the buffer is being played. As we are playing the audio on the music stream, STREAM_MUSIC is passed as a parameter to the setStreamVolume() method.

value: the amplitude (volume) level of the stream. Every stream has its own range of levels; the getStreamMaxVolume(STREAM_TYPE) method is used to find the maximum valid level of a stream.
flag: this StackOverflow post explains all the flags of the AudioManager class.

AudioManager audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
audioManager.setStreamVolume(AudioManager.STREAM_MUSIC, value, flag);
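
For example, here is a minimal sketch that plays the stream at roughly half of its maximum level (the flag value 0 applies the change without showing the system volume UI):

AudioManager audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
// query the maximum valid level of the music stream
int maxVolume = audioManager.getStreamMaxVolume(AudioManager.STREAM_MUSIC);
// play the buffer at roughly half amplitude; 0 means no extra flags
audioManager.setStreamVolume(AudioManager.STREAM_MUSIC, maxVolume / 2, 0);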

Roadmap

We are working on implementing methods to fill the audio buffer with specific values so that waves such as sine, square, and sawtooth waves can be generated during playback of the buffer through the AudioTrack object.
