Open Event API Server: Implementing FAQ Types

In the Open Event Server, there was a long-standing request from users to let event organisers create an FAQ section.

The FAQ API was implemented subsequently. It accepts the following request schema:

{
 "data": {
   "type": "faq",
   "relationships": {
     "event": {
       "data": {
         "type": "event",
         "id": "1"
       }
     }
   },
   "attributes": {
     "question": "Sample Question",
     "answer": "Sample Answer"
   }
 }
}

 

But what if the user wanted to group certain questions under a specific category? The FAQ API offered no solution for that, so a new API, FAQ-Types, was created.

Why make a separate API for it?

A question that arose while designing the FAQ-Types API was whether a separate API was necessary at all. Suppose a type attribute were simply added to the FAQ API itself. The client would then have to specify the type every time a new FAQ record is created, which means trusting that the user will always enter the same spelling for questions belonging to the same type. The user cannot be relied upon for that. A separate API ensures that the types remain controlled and that duplicate entries for the same type do not appear.

Helps in handling a large number of records:

Another concern was what happens when there are a large number of FAQ records under the same FAQ-Type. Entering the type for each of those questions would be cumbersome for the user. The FAQ-Types API overcomes this problem as well.

Following is the request schema for the FAQ-Types API:

{
 "data": {
   "attributes": {
     "name": "abc"
   },
   "type": "faq-type",
   "relationships": {
     "event": {
       "data": {
         "id": "1",
         "type": "event"
       }
     }
   }
 }
}

 

Additionally:

  • FAQ to FAQ-Type is a many-to-one relation.
  • A single FAQ can belong to only one type.
  • The FAQ-Type relationship is optional: if the user wants separate sections, they can add types; if not, that is entirely the user's choice. A minimal model sketch of this relationship is shown below.
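To make the relation concrete, here is a minimal sketch of how the two models could be wired up with SQLAlchemy-style declarations (the class, table and column names are illustrative and not necessarily the ones used in the Open Event Server):

from sqlalchemy import Column, ForeignKey, Integer, String, Text
from sqlalchemy.orm import relationship
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class FaqType(Base):
    """Illustrative FAQ-Type model: one type groups many FAQs."""
    __tablename__ = 'faq_types'

    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    # (the real models also carry a relationship to the event; omitted here)

    # one FAQ-Type -> many FAQs
    faqs = relationship('Faq', backref='faq_type')


class Faq(Base):
    """Illustrative FAQ model: each FAQ belongs to at most one type."""
    __tablename__ = 'faqs'

    id = Column(Integer, primary_key=True)
    question = Column(String, nullable=False)
    answer = Column(Text, nullable=False)

    # optional many-to-one link; nullable because the relationship is optional
    faq_type_id = Column(Integer, ForeignKey('faq_types.id'), nullable=True)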


Deleting Meilix Github Releases

Meilix is the repository which uses a build script to generate a community version of Lubuntu with the LXQt desktop. Meilix-Generator is the webapp which uses Meilix to generate the ISO and deploy it as a Meilix GitHub release; the webapp then mails the link of the ISO to the user.
A growing number of ISOs increases the number of releases, which clutters the Meilix repository. So we need to delete older releases after a certain interval of time to keep the release page clean and reclaim unwanted space.
This releases_maintainer.sh script will do this work for us.

#!/usr/bin/env bash
set -e
echo "This is a script to delete obsolete meilix iso builds by Abishek V Ashok"
echo "You have to add an authorization token to make it functional."

# jq is the JSON parser we will be using
sudo apt-get -y install jq

# Storing the response to a variable for future usage
response=`curl https://api.github.com/repos/fossasia/meilix/releases | jq '.[] | .id, .published_at'`

index=1  # when index is odd, $i contains id and when it is even $i contains published_date
delete=0 # Should we delete the release?
current_year=`date +%Y`  # Current year eg) 2001
current_month=`date +%m` # Current month eg) 2
current_day=`date +%d`   # Current date eg) 24

for i in $response; do
    if [ $((index % 2)) -eq 0 ]; then # We get the published_date of the release as $i's value here
        published_year=${i:1:4}
        published_month=${i:6:2}
        published_day=${i:9:2}

        if [ $published_year -lt $current_year ]; then
             let "delete=1"
        else
            if [ $published_month -lt $current_month ]; then
                let "delete=1"
            else
                if [ $((current_day-$published_day)) -gt 10 ]; then
                    let "delete=1"
                fi
            fi
        fi
    else # We get the id of the release as $i`s value here
        if [ $delete -eq 1 ]; then
            curl -X DELETE -H "Authorization: token $KEY" https://api.github.com/repos/fossasia/meilix/releases/$i
            let "delete=0"
        fi
    fi
    let "index+=1"
done

This code uses the GitHub API to curl the Meilix releases. The GitHub API provides a lot of information, but here we are only concerned with the release ID and the date and time of each build.
We then set up a condition; if a release satisfies it, that release automatically gets deleted.

To take care of authentication, a token has been added to the Travis settings of the FOSSASIA Meilix repository.

The personal token was generated by a user with write access to the repository, using the repo scope.

This sorts out the issue of having a bulk of releases in the FOSSASIA Meilix repository.

References:
  • Users GitHub API (REST API v3)
  • Repos GitHub API (REST API v3)

Open Event Server: Getting The Identity From The Expired JWT Token In Flask-JWT

The Open Event Server uses JWT based authentication, where JWT stands for JSON Web Token. JSON Web Tokens are an open industry standard RFC 7519 method for representing claims securely between two parties. [source: https://jwt.io/]

Flask-JWT is used for the JWT-based authentication in the project. Flask-JWT makes it easy to use JWT-based authentication in Flask, while at its core it still uses PyJWT.

To get the identity when a JWT token is present in the request's Authorization header, the current_identity proxy of Flask-JWT can be used as follows:

from flask_jwt import jwt_required, current_identity

@app.route('/example')
@jwt_required()
def example():
    return '%s' % current_identity

 

Note that current_identity will only be set in the context of a function decorated with jwt_required(). The problem with the current_identity proxy is that the token has to be active; the identity of an expired token cannot be fetched through it.

So why not write such a function on our own? A JWT is divided into three segments, separated by dots (.), which are:

  • Header
  • Payload
  • Signature

The first step would be to get the payload, which can be done as follows:

token_second_segment = _default_request_handler().split('.')[1]

 

The payload obtained above is still a base64-encoded JSON string; it can be converted into a dict as follows:

payload = json.loads(token_second_segment.decode('base64'))
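For reference, once decoded, a Flask-JWT payload typically looks something like the following (the timestamp values are made up, and the exact set of claims depends on Flask-JWT's payload handler):

# Illustrative decoded payload of a Flask-JWT token
payload = {
    'identity': 1,       # primary key of the user the token was issued for
    'iat': 1504603200,   # issued-at timestamp
    'nbf': 1504603200,   # not-before timestamp
    'exp': 1504603500    # expiry timestamp
}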

 

The identity can now be found in the payload as payload['identity']. We can get the actual user from the payload as follows:

def jwt_identity(payload):
   """
   Jwt helper function
   :param payload:
   :return:
   """
   return User.query.get(payload['identity'])

 

Our final function will now be something like:

def get_identity():
   """
   To be used only if identity for expired tokens is required, otherwise use current_identity from flask_jwt
   :return:
   """
   token_second_segment = _default_request_handler().split('.')[1]
   payload = json.loads(token_second_segment.decode('base64'))
   user = jwt_identity(payload)
   return user

 

But after using this function for some time, you will notice that for certain tokens, the system raises an error saying that the JWT token is missing padding. The JWT payload is base64 encoded, and base64 decoding requires the string length to be a multiple of four. If the length is not a multiple of four, the string can be padded with extra = (equals) signs. Since Python 2.7's .decode('base64') doesn't do that by default, we can handle it as follows:

missing_padding = len(token_second_segment) % 4

# ensures the string is correctly padded to be a multiple of 4
if missing_padding != 0:
   token_second_segment += b'=' * (4 - missing_padding)
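Putting the padding fix into the helper, the complete function looks roughly like the following sketch, which simply combines the snippets above (_default_request_handler and jwt_identity are the same helpers used earlier):

import json


def get_identity():
    """
    To be used only if identity for expired tokens is required,
    otherwise use current_identity from flask_jwt.
    """
    token_second_segment = _default_request_handler().split('.')[1]

    # base64 decoding needs the string length to be a multiple of 4,
    # so pad the segment with '=' characters if necessary
    missing_padding = len(token_second_segment) % 4
    if missing_padding != 0:
        token_second_segment += b'=' * (4 - missing_padding)

    payload = json.loads(token_second_segment.decode('base64'))
    return jwt_identity(payload)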

 


Creating Animations in GTK+ with Pycairo in SUSI Linux App

SUSI Linux has an assistant user interface to answer your queries. You may ask queries and SUSI answers interactively, using a host of skills ranging from entertainment, knowledge and media to science, technology and sports. While SUSI is finding the answer to a query, it makes sense to show an animation depicting the process. The UI for the SUSI Linux app is made entirely with GTK+ 3 using PyGObject (PyGTK). Thus, I needed to find a way to create animations in GTK+.

Animations in most frameworks are generally created by drawing repeatedly after a short interval, which creates the effect of movement. Thus, the basic need was to find a way to draw a custom object in GTK+. Reading the GTK+ documentation, I realized that this could be done with a GtkDrawingArea.

GtkDrawingArea defines an area on which application developers can do their own drawing. GTK itself does not provide a canvas or related objects to draw shapes; for that, we need the Cairo graphics library. Cairo is an open-source graphics library with support for multiple window systems. It can run on a variety of backends, though here we are not concerned with them.

Cairo can be accessed in Python using Pycairo, a set of Python 2 & 3 bindings for the Cairo graphics library. Resources on using Pycairo for animations with GTK+ 3 are scarce, so I will try to explain it in this blog in detail. We will start by creating an Animator class extending the Gtk.DrawingArea class.

import math

import cairo  # math and cairo are used by the concrete animators further below
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, GLib


class Animator(Gtk.DrawingArea):
    def __init__(self, **properties):
        super().__init__(**properties)
        self.set_size_request(200, 80)
        self.connect("draw", self.do_drawing)
        GLib.timeout_add(50, self.tick)

    def tick(self):
        # invalidate the widget area so another "draw" signal is emitted
        self.queue_draw()
        return True

    def do_drawing(self, widget, ctx):
        # ctx is the Cairo context; always draw based on the allocated size
        self.draw(ctx, self.get_allocated_width(), self.get_allocated_height())

    def draw(self, ctx, width, height):
        # abstract method, overridden by the concrete animators
        pass

In the above code stub, we created the Animator class extending Gtk.DrawingArea. The Animator class is meant to be an abstract class for the other animators. We defined the size request for the area we want and connected the "draw" signal to the do_drawing method. One notable thing here is that GTK+ 3 has the "draw" signal, while GTK+ 2 had the "expose-event" signal. The draw signal is fired whenever the widget needs to be redrawn. In the handler do_drawing, we receive the widget and ctx, where ctx is the Cairo context; we can perform our drawing with its help. We further call the draw method, passing the context, width and height of the widget. Another notable thing is that, even though we requested a size for the widget, the allocated size might be different depending on a number of factors, so drawing must be done according to the allocated area instead of the requested one.

draw is an abstract method; all animators must override it to implement custom drawing on the widget area. Lastly, we add a timeout-based call to the tick method, which is what drives the animation. This is done with GLib.timeout_add(), where the first argument is the time in milliseconds after which the callback should fire and the second argument is the callback itself, here the tick method of the class. The callback must return True for the timeout to keep firing. Inside tick we call queue_draw, which invalidates the current area and generates the draw signal again.

Now that we know how the core of the animation works, let us define some cool animations for the listening phase of the application. We define the ListeningAnimator for this purpose.

class ListeningAnimator(Animator):
   def __init__(self, window, **properties):
       super().__init__(**properties)
       self.window = window
       self.tc = 0

   def draw(self, ctx, width, height):

       self.tc += 0.2
       self.tc %= 2 * math.pi

       for i in range(-4, 5):
           ctx.set_source_rgb(0.2, 0.5, 1)
           ctx.set_line_width(6)
           ctx.set_line_cap(cairo.LINE_CAP_ROUND)
           if i % 2 == 0:
               ctx.move_to(width / 2 + i * 10, height / 2 + 3 - 
                           8 * math.sin(self.tc + i))
               ctx.line_to(width / 2 + i * 10, height / 2 - 3 + 
                           8 * math.sin(self.tc + i))
           else:
               ctx.set_source_rgb(0.2, 0.7, 1)
               ctx.move_to(width / 2 + i * 10, height / 2 + 3 - 
                           8 * math.cos(self.tc - i))
               ctx.line_to(width / 2 + i * 10, height / 2 - 3 + 
                           8 * math.cos(self.tc - i))
           ctx.stroke()  # stroke each line with the color set in its branch

Here we draw some lines with round caps to create an effect like the one shown below.

Since the lines must move, we use trigonometric functions to create a sinusoidal movement with a phase difference between adjacent lines. To set the color of the brush, we use the set_source_rgb method. We may then move the pointer to the desired position and draw lines or other shapes. Then we can either fill the shape or draw strokes using the relevant methods. The full list of methods can be found in the official documentation.
After creating the widget, it can be added to the UI depending on the type of container, generally via the add method, as in the sketch below. You may access the full code in the SUSI Linux repository to learn more about its usage in the SUSI Linux app. The final result can be seen in the following video.
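For example, a minimal way to show the animator in a window could look like this (the window setup here is illustrative and not taken from the SUSI Linux code):

import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk

# Assumes the ListeningAnimator class defined above is available in scope
window = Gtk.Window(title="Animation demo")
window.connect("destroy", Gtk.main_quit)

animator = ListeningAnimator(window)
window.add(animator)   # add the drawing widget to the container

window.show_all()
Gtk.main()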


Adding Tweet Streaming Feature in World Mood Tracker loklak App

The World Mood Tracker was added to loklak apps with the feature to display aggregated data from the emotion classifier of the loklak server. The next step for the app was adding the feature to display the stream of Tweets from a country as they are discovered by loklak. With the addition of the stream servlet in loklak, it became possible to utilise it in this app.

In this blog post, I will be discussing the steps taken to introduce this feature in the World Mood Tracker app.

Props for WorldMap component

The WorldMap component holds the view for the map displayed in the app. This is where API calls to the classifier endpoint are made and results are displayed on the map. In order to display tweets on clicking a country, we need to define React props so that methods from higher-level components can be called.

In order to enable props, we need to change the constructor for the component –

export default class WorldMap extends React.Component {
    constructor(props) {
        super(props);
        ...
    }
    ...
}

[SOURCE]

We can now pass the method from the parent component to enable streaming, and other components can close the stream using props –

export default class WorldMoodTracker extends React.Component {
    ...
    showStream(countryName, countryCode) {
        /* Do something to enable streaming component */
        ...
    }
 
    render() {
        return (
             ...
                <WorldMap showStream={this.showStream}/>
             ...
        )
    }
}

[SOURCE]

Defining Actions on Clicking Country Map

As mentioned in an earlier blog post, World Mood Tracker uses Datamaps to visualize data on a map. In order to trigger a piece of code on clicking a country, we can use the “done” method of the Datamaps instance. This is where we use the props passed earlier –

done: function(datamap) {
    datamap.svg.selectAll('.datamaps-subunit').on('click', function (geography) {
        props.showStream(geography.properties.name, reverseCountryCode(geography.id));
    })
}

[SOURCE]

The name and ID of the country will be used to display the name and to make the API call to the stream endpoint, respectively.

The StreamOverlay Component

The StreamOverlay component holds all the utilities to display the stream of Tweets from loklak. This component is used from its parent component, whose state holds the info about whether to display it –

export default class WorldMoodTracker extends React.Component {
    ...
    getStreamOverlay() {
        if (this.state.enabled) {
            return (<StreamOverlay
                show={true} channel={this.state.channel}
                country={this.state.country} onClose={this.onOverlayClose}/>);
        }
    }

    render() {
        return (
            ...
                {this.getStreamOverlay()}
            ...
        )
    }
}

[SOURCE]

The corresponding props passed are used to render the component and connect to the stream from loklak server.

Creating Overlay Modal

On clicking the map, an overlay is shown. To display this overlay, react-overlays is used. The Modal component offered by the package provides a very simple interface to define the design and behaviour of the component, including its style, close hooks, etc.

import {Modal} from 'react-overlays';

<Modal aria-labelledby='modal-label'
    style={modalStyle}
    backdropStyle={backdropStyle}
    show={true}
    onHide={this.close}>
    <div style={dialogStyle()}>
        ...
    </div>
</Modal>

[SOURCE]

It must be noted that modalStyle and backdropStyle are React style objects.

Dialog Style

The dialog style is defined to leave some space at the top; clicking there closes the overlay. To do this, viewport height (vh) units are used –

const dialogStyle = function () {
    return {
        position: 'absolute',
        width: '100%',
        top: '5vh',
        height: '95vh',
        padding: 20
        ...
    };
};

[SOURCE]

Connecting to loklak Tweet Stream

loklak sends Server-Sent Events to the clients connected to it. To utilise this stream, we can use the natively supported EventSource object. The event stream is started from the render method of the StreamOverlay component –

render () {
    this.startEventSource(this.props.channel);
    ...
}

[SOURCE]

This channel name is used to connect to the twitter/country/<country-ID> channel on the stream, which is then passed to the EventSource constructor. On receiving a message, the Tweet is appended to a list and later rendered in the view –

startEventSource(country) {
    let channel = 'twitter%2Fcountry%2F' + country;
    if (this.eventSource) {
        return;
    }
    this.eventSource = new EventSource(host + '/api/stream.json?channel=' + channel);
    this.eventSource.onmessage = (event) => {
        let json = JSON.parse(event.data);
        this.state.tweets.push(json);
        if (this.state.tweets.length > 250) {
            this.state.tweets.shift();
        }
        this.setState(this.state);
    };
}

[SOURCE]

The size of the list is restricted to 250 here, so when a newer Tweet comes in, the oldest one is chopped off. And thanks to fast DOM actions in React, the rendering doesn’t take much time.

Rendering Tweets

The Tweets are displayed as simple cards on which user can click to open it on Twitter in a new tab. It contains basic information about the Tweet – screen name and Tweet text. Images are not rendered as it would make no sense to load them when Tweets are coming at a high rate.

function getTweetHtml(json) {
    return (
        <div style={{padding: '5px', borderRadius: '3px', border: '1px solid black', margin: '10px'}}>
            <a href={json.link} target="_blank">
            <div style={{marginBottom: '5px'}}>
                <b>@{json['screen_name']}</b>
            </div>
            <div style={{overflowX: 'hidden'}}>{json['text']}</div>
            </a>
        </div>
    )
}

[SOURCE]

They are rendered using a simple map in the render method of StreamOverlay component –

<div className={styles.container} style={{'height': '100%', 'overflowY': 'auto',
    'overflowX': 'hidden', maxWidth: '100%'}}>
    {this.state.tweets.reverse().map(getTweetHtml)}
</div>

[SOURCE]

Closing Overlay

With the previous setup in place, we can now see Tweets from the loklak backend as they arrive. But the problem is that we would still be connected to the stream after closing the modal. Also, we need to close the overlay from the parent component in order to stop rendering it.

We can use the close method (wired to the Modal's onHide prop) here –

close() {
    if (this.eventSource) {
        this.eventSource.close();
        this.eventSource = null;
    }
    this.props.onClose();
}

[SOURCE]

Here, props.onClose() disables rendering of StreamOverlay in the parent component.

Conclusion

In this blog post, I explained how the flow of props is used in the World Mood Tracker app to turn the streaming in the overlay (defined using react-overlays) on and off. This feature shows a basic setup for using the newly introduced stream API in loklak.

The motivation for such an application was taken from emojitracker by mroth, as mentioned in fossasia/labs.fossasia.org#136. The changes were proposed in fossasia/apps.loklak.org#315 by @singhpratyush (me).

The app can be accessed live at https://singhpratyush.github.io/world-mood-tracker/index.html.


Creating Dynamic Forms Using Custom-Form API in Open Event Front-end

The Open Event Front-end allows event creators to customise the sessions & speakers forms, which are implemented on the Orga Server using the custom-form API. While creating an event, the organiser can select the form fields which will be placed in the speaker & session forms.

In this blog we will see how we created custom forms for sessions & speakers using the custom-form API. Let's see how we did it.

Retrieving all the form fields

Each event has custom form fields which can be enabled on the sessions-speakers page, where the organiser can include or exclude fields for the speaker & session forms used by the organiser and speakers.

return this.modelFor('events.view').query('customForms', {});

We return the result of the query to the new-session route, where we will create a form using the fields included in the event.

Creating form using custom form API

The model returns an array of all the fields related to the event; however, we need to group them according to the form each field belongs to, i.e. session or speaker. We use lodash's groupBy.

allFields: computed('fields', function() {
  return groupBy(this.get('fields').toArray(), field => field.get('form'));
})

For the session form we loop over allFields.session, which is an array of all the fields related to the session form. We check if the field is included and render it.

{{#each allFields.session as |field|}}
  {{#if field.isIncluded}}
    <div class="field">
      <label class="{{if field.isRequired 'required'}}" for="name">{{field.name}}</label>
      {{#if (or (eq field.type 'text') (eq field.type 'email'))}}
        {{#if field.isLongText}}
          {{widgets/forms/rich-text-editor textareaId=(if field.isRequired (concat 'session_' field.fieldIdentifier '_required'))}}
        {{else}}
          {{input type=field.type id=(if field.isRequired (concat 'session_' field.fieldIdentifier '_required'))}}
        {{/if}}
      {{/if}}
    </div>
  {{/if}}
{{/each}}

We also use a unique id on all the fields for form validation. If the field is required, we create a unique id of the form `session_fieldName_required`, for which we add a validation in the session-speaker-form component. We also use different components for different types of fields, e.g. for a long text field we make use of the rich-text-editor component.

Thank you for reading the blog, you can check the source code for the example here.


Controlling Motors using PSLab Device

The PSLab device is capable of setting up a complete science lab almost anywhere. While it is mostly used by high school students and teachers to perform scientific experiments, electronics hobbyists can also benefit greatly from the device. One such use is to test and debug sensors and other electronic components before actually using them in projects. In this blog it is explained how common hobbyist motors can be driven with the PSLab device.

There are four types of motors generally used by hobbyists in their DIY (Do-It-Yourself) projects. They are:

  • DC Gear Motor
  • DC Brushless Motor
  • Servo Motor
  • Stepper Motor

DC motors do not require much control, as their internal structure is simply a magnet and a shaft made to rotate in the magnetic field. The following image from SlideShare illustrates the cross-section of a motor. These motors require high currents, and the PSLab device, being powered from the USB port of a PC or a mobile phone, cannot provide such high current. Hence these types of motors are not recommended for use with the device, as there is a very high probability that something might burn.

In the current context, we are concerned with servo motors and stepper motors. They cannot be driven by simply applying a direct current. Inside these motors the structure is different, and they require a set of controlled signals to function. The following diagram from Electronics-Tutorials illustrates the feedback loop inside a servo motor. A servo motor is driven using a PWM wave; the duty cycle determines the rotational angle. The PSLab device is capable of generating four different square waves at any duty cycle varying from 0% to 100%. This gives us the freedom to acquire any angle we desire from a servo motor. The experiment "Servo Motors" implements the following method, which accepts four angles:

public void servo4(double angle1, double angle2, double angle3, double angle4)

The experiment supports control of four different servo motors at independent angles. Most of the servos available in the market support only 180-degree rotation, while some servos can rotate indefinitely. In that case, the servo will rotate one cycle and return to its initial position.
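As an illustration of the duty-cycle-to-angle relationship, here is a rough sketch of the arithmetic for a typical hobby servo that expects a 50 Hz PWM signal with a 1 ms to 2 ms pulse for 0 to 180 degrees (these numbers are the usual hobby-servo convention and are not taken from the PSLab firmware):

def angle_to_duty_cycle(angle, frequency_hz=50.0,
                        min_pulse_ms=1.0, max_pulse_ms=2.0):
    """Map a servo angle (0-180 degrees) to a PWM duty cycle in percent.

    Assumes the common hobby-servo convention: a 1 ms pulse means 0 degrees
    and a 2 ms pulse means 180 degrees, repeated every 20 ms (50 Hz).
    """
    period_ms = 1000.0 / frequency_hz   # 20 ms at 50 Hz
    pulse_ms = min_pulse_ms + (angle / 180.0) * (max_pulse_ms - min_pulse_ms)
    return 100.0 * pulse_ms / period_ms  # duty cycle in percent


# Example: 90 degrees -> 1.5 ms pulse -> 7.5% duty cycle at 50 Hz
print(angle_to_duty_cycle(90))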

The last type of motor is the stepper motor. As the name says, this motor moves in steps. Inside the motor there are four coils, with five wires coming out of the motor body to connect them. The illustration from Wikipedia shows how four steps are obtained by powering up the respective coils in order. This powering-up process needs to be controlled and is hard to do manually. Using the PSLab experiment "Stepper Motor", a user can obtain any number of steps just by entering the step value in the text box. The implementation consists of a set of method calls:

scienceLab.stepForward(steps, 100);

scienceLab.stepBackward(steps, 100);

A delay of 100 milliseconds is provided so that there is enough time to produce a step. Otherwise the shaft will not experience enough resultant force to move and will remain in the same position.
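To make the coil-energising order concrete, here is a sketch of a simple full-step sequence for a four-coil (five-wire) stepper; this is the textbook full-step pattern rather than code from the PSLab app, and set_coils is a hypothetical helper that would switch the four outputs:

import time

# Full-step sequence: energise one coil at a time, in order (A, B, C, D)
FULL_STEP_SEQUENCE = [
    (1, 0, 0, 0),
    (0, 1, 0, 0),
    (0, 0, 1, 0),
    (0, 0, 0, 1),
]


def step_forward(set_coils, steps, delay_ms=100):
    """Advance the motor by `steps` steps using the full-step pattern."""
    for i in range(steps):
        set_coils(FULL_STEP_SEQUENCE[i % 4])
        time.sleep(delay_ms / 1000.0)   # give the shaft time to settle on each step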

These two experiments are possible with the PSLab because the current drawn is quite small and can be delivered through a regular USB port. It is worth mentioning that industrial-grade servo and stepper motors may draw high currents, as they are built to move heavy loads, so they are not suitable for this type of experiment.


Creating GUI for configuring SUSI Linux Settings

The SUSI Linux app provides access to SUSI on desktop Linux distributions as well as on hardware devices like the Raspberry Pi. The settings for SUSI Linux are controlled through a config.json file. You may edit the file manually, but to provide safe configurations we have a config generator script. You may run the script to configure settings like the TTS engine, the STT engine, authentication, the choice of hotword engine, etc. Generally, it is easier to configure application settings through a GUI. Thus, we added a GUI for it using PyGTK and Glade.

Glade is a GUI designer for GNOME-based Linux systems. I wrote a blog about how to create user interfaces in Glade and access them from Python code in SUSI Linux. Now, for creating the UI for the configuration screen, we need to choose an ideal layout. Glade provides various choices like BoxLayout, GridLayout, FlowBox, ListBox, Notebook, etc. Since we need to display only basic settings options, we select the BoxLayout for this purpose.

BoxLayout, as the name suggests, forms a box-like arrangement for widgets. You can arrange the widgets in either a vertical or a horizontal layout. We select Application Window as the top-level container and add a BoxLayout container to it. Now, in each box of the BoxLayout, we need to add widgets like a ComboBox or Switch for the user's choice, and a Label. This can be done by using a horizontal BoxLayout with the corresponding widgets. After arranging the UI in the above-described manner, we have a GUI like the one below.

If you see the current window in the preview now, you will find that the ComboBoxes do not have any items. We need to define the items in the ComboBox using a GtkListStore. You may refer to this video tutorial to see how this can be done.

Now, when we see the preview, our GUI is fully functional. We have options for the Speech Recognition service and the Text to Speech service in ComboBoxes. The other simple settings are available as switches.

Now we need to add functionality to our UI. We want our code to be modular and structured, therefore we declare a ConfigurationWindow class. Though the ideal way to handle such cases is to inherit from the Gtk.Window class, reading the PyGObject documentation I could not find a way to do this for windows created through Glade. Thus, we use composition and store the window object. We add the window and the other widgets present in the UI as properties of the ConfigurationWindow class like this:

class ConfigurationWindow:
   def __init__(self) -> None:
       super().__init__()
       builder = Gtk.Builder()
       builder.add_from_file(os.path.join("glade_files/configure.glade"))

       self.window = builder.get_object("configuration_window")
       self.stt_combobox = builder.get_object("stt_combobox")
       self.tts_combobox = builder.get_object("tts_combobox")
       self.auth_switch = builder.get_object("auth_switch")
       self.snowboy_switch = builder.get_object("snowboy_switch")
       self.wake_button_switch = builder.get_object("wake_button_switch")

Now, we need to connect the Signals from our configuration window to the Handler. We declare the Handler as a nested class in the ConfigurationWindow class because its scope of usage is inside the ConfigurationWindow object. Then you may connect signals to an object of the Handler class.

builder.connect_signals(ConfigurationWindow.Handler(self))

Since we may need to modify the state of the widgets, we hold a reference to the parent ConfigurationWindow object in the Handler and pass self as a parameter to it. You may read more about using handlers in my previous blog.

In the Handler, we read the config.json file and change its parameters based on the user's inputs in the GUI. We handle it for the Speech To Text (STT) selection ComboBox in the following manner. We also declare two additional Dialogs for handling the input of the user's credentials for the Watson and Bing services.

def on_stt_combobox_changed(self, combo: Gtk.ComboBox):
   selection = combo.get_active()

   if selection == 0:
       config['default_stt'] = 'google'

   elif selection == 1:
       credential_dialog = WatsonCredentialsDialog(self.config_window.window)
       response = credential_dialog.run()

       if response == Gtk.ResponseType.OK:
           username = credential_dialog.username_field.get_text()
           password = credential_dialog.password_field.get_text()
           config['default_stt'] = 'watson'
           config['watson_stt_config']['username'] = username
           config['watson_stt_config']['password'] = password
       else:
           self.config_window.init_stt_combobox()

       credential_dialog.destroy()

   elif selection == 2:
       credential_dialog = BingCredentialDialog(self.config_window.window)
       response = credential_dialog.run()

       if response == Gtk.ResponseType.OK:
           api_key = credential_dialog.api_key_field.get_text()
           config['default_stt'] = 'bing'
           config['bing_speech_api_key']['username'] = api_key
       else:
           self.config_window.init_stt_combobox()

       credential_dialog.destroy()

Now, we declare two more methods to show and exit the Window.

def show_window(self):
   self.window.show_all()
   Gtk.main()

def exit_window(self):
   self.window.destroy()
   Gtk.main_quit()

Now we may use a ConfigurationWindow object anywhere in our code. This modularized approach is better when you need to manage multiple windows, as you can just declare a window of a particular type and show it whenever needed, for example like this:
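A minimal usage sketch, assuming the ConfigurationWindow class above is importable from the module where it is defined:

# Open the configuration window and block until it is closed
config_window = ConfigurationWindow()
config_window.show_window()   # calls window.show_all() and Gtk.main()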

Resources

  • Glade usage Youtube tutorial: https://www.youtube.com/watch?v=vOGK3TveDDk
  • Creating GUI using PyGTK for SUSI Linux: https://blog.fossasia.org/making-gui-for-susi-linux-with-pygtk/
  • PyGObject Documentation: http://pygobject.readthedocs.io/en/latest/getting_started.html

Filling Audio Buffer to Generate Waves in the PSLab Android App

The PSLab Android App works as an oscilloscope and a wave generator using the audio jack of the Android device. The implementation of the oscilloscope in the Android device using the in-built mic has been discussed in the blog post “Using the Audio Jack to make an Oscilloscope in the PSLab Android App” and the same has been discussed in the context of wave generator in the blog post “Implement Wave Generation Functionality in the PSLab Android App”. This post is a continuation of the post related to the implementation of wave generation functionality in the PSLab Android App. In this post, the subject matter of discussion is the way to fill the audio buffer so that the resulting wave generated is either a Sine Wave, a Square Wave or a Sawtooth Wave. The resultant audio buffer would be played using the AudioTrack API of Android to generate the corresponding wave. The waves we are trying to generate are periodic waves.

Periodic Wave: A wave whose displacement has a periodic variation with respect to time or distance, or both.

Thus, the problem reduces to generating a pulse which will constitute a single time period of the wave. Suppose we want to generate a sine wave; if we generate a continuous stream of pulses as illustrated in the image below, we would get a continuous sine wave. This is the main concept that we shall try to implement using code.

Initialise AudioTrack Object

AudioTrack object is initialised using the following parameters:

  • STREAM TYPE: Type of stream like STREAM_SYSTEM, STREAM_MUSIC, STREAM_RING, etc. For wave generation purposes we are using stream music. Every stream has its own maximum and minimum volume level.  
  • SAMPLING RATE: It is the rate at which the source samples the audio signal.
  • BUFFER SIZE IN BYTES: Total size of the internal buffer in bytes from where the audio data is read for playback.
  • MODES: There are two modes-
    • MODE_STATIC: Audio data is transferred from Java to the native layer only once before the audio starts playing.
    • MODE_STREAM: Audio data is streamed from Java to the native layer as audio is being played.

getMinBufferSize() returns the estimated minimum buffer size required for an AudioTrack object to be created in the MODE_STREAM mode.

minTrackBufferSize = AudioTrack.getMinBufferSize(SAMPLING_RATE, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
audioTrack = new AudioTrack(
       AudioManager.STREAM_MUSIC,
       SAMPLING_RATE,
       AudioFormat.CHANNEL_OUT_MONO,
       AudioFormat.ENCODING_PCM_16BIT,
       minTrackBufferSize,
       AudioTrack.MODE_STREAM);

Fill Audio Buffer to Generate Sine Wave

Depending on the values in the audio buffer, the wave is generated by the AudioTrack object. Therefore, to generate a specific kind of wave, we need to fill the audio buffer with some specific values. The values are governed by the wave equation of the signal that we want to generate.
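For a sine wave, for example, the samples in their simplest form follow the sampled-sine relation below (ignoring the frequency smoothing that the code applies), where $A$ is the amplitude, $f$ the frequency and $f_s$ the sampling rate:

$$\mathrm{buffer}[i] = A \sin\left(\frac{2\pi f i}{f_s}\right)$$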

public short[] createBuffer(int frequency) {
   short[] buffer = new short[minTrackBufferSize];
   double f = frequency;
   double q = 0;
   double level = 16384;
   final double K = 2.0 * Math.PI / SAMPLING_RATE;

   for (int i = 0; i < minTrackBufferSize; i++) {
         f += (frequency - f) / 4096.0;
         q += (q < Math.PI) ? f * K : (f * K) - (2.0 * Math.PI);
          buffer[i] = (short) Math.round(level * Math.sin(q));  // scale the sample to 16-bit amplitude
   }
   return buffer;
}

Fill Audio Buffer to Generate Square Wave

To generate a square wave, let’s assume the time period to be t units. So, we need the amplitude to be equal to A for t/2 units and -A for the next t/2 units. Repeating this pulse continuously, we will get a square wave.

buffer[i] = (short) ((q > 0.0) ? level : -level);  // +level for half the period, -level for the other half

Fill Audio Buffer to Generate Sawtooth Wave

A ramp signal increases linearly with time. A ramp pulse is illustrated in the image below:

We need repeated ramp pulses to generate a continuous sawtooth wave.

buffer[i] = (short) Math.round(level * (q / Math.PI));  // ramp scaled by the amplitude

Finally, when the audio buffer is generated, we write it to the audio sink for playback using the write() method exposed by the AudioTrack object.

audioTrack.write(buffer, 0, buffer.length);


Making Skill Display Cards Identical in SUSI.AI Skill CMS

SUSI.AI Skill CMS shows all the skills of SUSI.AI. The cards used to display the skills follow a flexbox structure and adjust their height according to the content. This led to cards of different sizes, which needed to be fixed, as the cards looked like this:

The cards display the following things:

  • Image related to skill
  • An example query related to skill in double quotes
  • Name of skill
  • Short description of skill

Now, to get all of this, we make an AJAX call to the following endpoint:

http://api.susi.ai/cms/getSkillList.json?model='+ this.state.modelValue + '&group=' + this.state.groupValue + '&language=' + this.state.languageValue

Explanation:

  • this.state.modelValue: This is the model of the skill, stored in state of component
  • this.state.groupValue: This represents the group to which the skill belongs, for example Knowledge, Communication, Music and Audio, etc.
  • this.state.languageValue: This represents the ISO language code of the language in which the skill is defined

Now the response is in JSONP format and it looks like:

Now we parse the response to get the information we need and return the following Card (a Material-UI component):

<Link key={el}
    to={{
        pathname: '/' + self.state.groupValue + '/' + el + '/' + self.state.languageValue,
        state: {
            url: url,
            element: el,
            name: el,
            modelValue: self.state.modelValue,
            groupValue: self.state.groupValue,
            languageValue: self.state.languageValue,
        }
    }}>
    <Card style={styles.row} key={el}>
        <div style={styles.right} key={el}>
            {image ?
                <div style={styles.imageContainer}>
                    <img alt={skill_name}
                        src={image}
                        style={styles.image} />
                </div> :
                <CircleImage name={el} size='48' />}
            <div style={styles.titleStyle}>{examples}</div>
        </div>
        <div style={styles.details}>
            <h3 style={styles.name}>{skill_name}</h3>
            <p style={styles.description}>{description}</p>
        </div>
    </Card>
</Link>

Now, the information that leads to non-uniformity in these cards is the skill description. To solve this, we decided to put a limit on the description length; if that limit is crossed, we show an ellipsis: "…". The height and width of the cards were fixed according to the screen size, and we modified the description as follows:

if (skill.descriptions) {
      if (skill.descriptions.length > 120) {
          description = skill.descriptions.substring(0, 119) + '...';
      }
      else {
          description = skill.descriptions;
      }
}

This way no content gets cut off abruptly and all the skill cards look identical:
