Basics behind BJT and FET experiments in PSLab

A high school student will come across certain electronics and electrical experiments in the curriculum. Some of them relate to semiconductor devices such as Bipolar Junction Transistors (BJTs) and Field Effect Transistors (FETs). The PSLab device is capable of functioning as a waveform generator, voltage and current source, oscilloscope and multimeter. Using these functionalities, one can design an experiment. This blog post brings out the basics one should know about the experiment and the PSLab device in order to program an experiment for the saved experiments section.

Channels and Sources in the PSLab Device

The PSLab device has three pins dedicated to functioning as programmable voltage sources (PVS) and one pin functioning as a programmable current source (PCS).

Programmable Voltage Sources can generate voltages as follows:

  • PV1 →  -5V ~ +5V
  • PV2 → -3.3V ~ +3.3V
  • PV3 → 0 ~ +3.3V

The Programmable Current Source (PCS) can generate current as follows:

  • PCS → 0 ~ 3.3mA

The device has a 4-channel oscilloscope; of those, the CH1, CH2 and CH3 pins are useful in experiments of this type.

About BJTs and FETs

Most semiconductor devices are made of silicon (Si); some are made of germanium (Ge), but they are not widely used. Silicon has a potential barrier of 0.7 V between the P-type and N-type sections of a semiconductor device. This voltage value is really important in an experiment: in practicals such as the “BJT Amplifier”, there is no use in setting a voltage below this value, so the experiment needs to be programmed to have 0.7 V as the minimum voltage for the Base terminal.
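As a rough illustration, an experiment could clamp the requested base voltage to this barrier before driving the pin; this is only a sketch, and the setPV3 method used here is one of the ScienceLab voltage-setting methods described later in this post:

// Minimal sketch: never drive the Base terminal below the 0.7 V silicon barrier
public void setBaseVoltage(ScienceLab scienceLab, float requested) {
    float clamped = Math.max(requested, 0.7f); // 0.7 V is the Si potential barrier
    scienceLab.setPV3(clamped);                // PV3 covers 0 to +3.3 V
}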

Basic BJT experiments

BJTs have three pins: Collector, Emitter and Base. Current into the Base pin controls the flow of electrons from Emitter to Collector, creating a voltage difference between the Collector and Emitter pins. This behavior can be broken down into three types of characteristics:

  • Input Characteristics → relationship between the Emitter current (IE) and VBE (Base-to-Emitter voltage)
  • Output Characteristics → relationship between IC (Collector current) and VCB (Collector-to-Base voltage)
  • Transfer Characteristics → relationship between IC (Collector current) and IE (Emitter current)

[Figures: Input, Output and Transfer characteristic curves of a BJT]

Basic FET experiments

FETs have three pins: Drain, Source and Gate. The voltage applied to the Gate terminal controls the electron flow between the Source and the Drain in either direction. This scenario results in two types of experiments:

  • Output Characteristics → relationship between the Drain current and the Drain-to-Source voltage
  • Transfer Characteristics → relationship between the Gate-to-Source voltage and the Drain current
[Figures: Output and Transfer characteristic curves of a FET]

Using existing methods in PSLab android repository

The current implementation of the Android application consists of all the methods required to read voltages and currents from the relevant pins, fetch waveforms from the channel pins and output voltages from the PVS pins.

ScienceLab.java class – This class implements all the methods required for any kind of experiment. The methods that will be useful in designing BJT and FET related experiments are:

Set Voltages

public void setPV1(float value);

public void setPV2(float value);

public void setPV3(float value);

Set Currents

public void setPCS(float value);

Read Voltages

public double getVoltage(String channelName, Integer sample);

Read Currents

There is no direct way implemented to read current. The current flow between two nodes can be calculated using the PVS pin value and the voltage value read from the channel pins; Ohm’s law is applied with the known resistance between the two nodes.

In the following schematic, the collector current can be calculated from the known PV1 value and the measured CH1 value as follows, where 1000 Ω is the known resistance between the two nodes:

IC = (PV1 - CH1) / 1000

This is how it is actually implemented in the existing experiments.
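For a new experiment, a sweep routine built on the ScienceLab methods above might look like the following. This is only a rough sketch: the 1 kΩ resistor, the pin wiring and the sweep ranges are assumptions for illustration, not part of the existing implementation.

// Rough sketch: sweep the collector supply (PV1) and compute the collector
// current for a fixed base drive. Assumes a 1 kΩ resistor between PV1 and
// the collector, with CH1 probing the collector side of that resistor.
public double[][] sweepOutputCharacteristic(ScienceLab scienceLab) {
    double[][] points = new double[51][2];
    scienceLab.setPV2(1.0f); // fixed base drive, above the 0.7 V barrier
    for (int i = 0; i <= 50; i++) {
        float vc = i * 0.1f; // collector supply swept from 0 V to 5 V
        scienceLab.setPV1(vc);
        double ch1 = scienceLab.getVoltage("CH1", 1);
        points[i][0] = ch1;                 // collector voltage
        points[i][1] = (vc - ch1) / 1000.0; // IC by Ohm's law over the 1 kΩ resistor
    }
    return points;
}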

If one needs to implement a new experiment of any kind, these are the basics one needs to know. Many new experiments can be implemented using them. Some of them could be:

  • Effect of the temperature coefficient on Collector current
  • The influence of the β factor on Collector current


UI Espresso Test Cases for Phimpme Android

Now we are heading toward a release of Phimpme soon, so we are increasing the code coverage by writing test cases for our app. What is a test case? Test cases are scripts against which we run our code to test the implementation of features. A test case basically contains the expected output, flow and feature steps of the app. To release an app on multiple platforms, it is highly recommended to test it against test cases.

For example, let’s consider that we are developing an app which has one button. First we write a UI test case which checks whether the button is displayed on the screen or not, and in response it reports a pass or fail for the test case.

Steps to add a UI test case using Espresso

The Espresso testing framework provides APIs to simulate user interactions, and it has a concise API. The new version of Android Studio even has a feature to record Espresso test cases. I’ll show you how to use the recorder to write test cases in the steps below.

  • Setup Project Directory

Android instrumentation tests must be placed in the androidTest directory. If it is not there, create a directory at app/src/androidTest/java…

  • Write Test Case

Firstly, I am writing a very simple test case which checks whether the three bottom navigation view items are displayed or not.

Espresso Testing framework has basically three components:

ViewMatchers

This helps to find the correct view on which some action can be performed, e.g. onView(withId(R.id.navigation_accounts)). Here I am taking the view of the accounts item in the bottom navigation view.

ViewActions

This allows us to perform actions on the view we obtained earlier; e.g. a very basic operation used is click().

ViewAssertions

This allows us to assert the current state of the view; e.g. isDisplayed() is an assertion on the view we obtained. So the basic architecture of an Espresso test case is:

onView(ViewMatcher)       
 .perform(ViewAction)     
   .check(ViewAssertion);

We can also use the Hamcrest framework, which provides extra features for checking conditions in the code.
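Putting the three components together, a minimal test for the bottom navigation items described above might look like the following sketch. The activity class and the navigation_home/navigation_upload IDs are assumptions for illustration; only navigation_accounts appears above.

import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.isDisplayed;
import static android.support.test.espresso.matcher.ViewMatchers.withId;

import android.support.test.rule.ActivityTestRule;
import android.support.test.runner.AndroidJUnit4;

import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(AndroidJUnit4.class)
public class BottomNavigationTest {

    // MainActivity is an assumed launcher activity for this example
    @Rule
    public ActivityTestRule<MainActivity> activityRule =
            new ActivityTestRule<>(MainActivity.class);

    @Test
    public void bottomNavigationItemsAreDisplayed() {
        // Match each bottom navigation item by ID and assert it is visible
        onView(withId(R.id.navigation_home)).check(matches(isDisplayed()));
        onView(withId(R.id.navigation_upload)).check(matches(isDisplayed()));
        onView(withId(R.id.navigation_accounts)).check(matches(isDisplayed()));
    }
}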

Setup Espresso in Code

Add this to your application-level build.gradle:

// Android Testing Support Library's runner and rules
androidTestCompile "com.android.support.test:runner:$rootProject.ext.runnerVersion"
androidTestCompile "com.android.support.test:rules:$rootProject.ext.rulesVersion"

// Espresso UI Testing dependencies.
androidTestCompile "com.android.support.test.espresso:espresso-core:$rootProject.ext.espressoVersion"
androidTestCompile "com.android.support.test.espresso:espresso-contrib:$rootProject.ext.espressoVersion"

  • Use the recorder to write the test case

The new recorder feature is great if you want to set everything up quickly. Go to Run → Record Espresso Test in Android Studio.

It dumps the current user interface hierarchy and provides the feature to assert on it.

You can edit the assertions by selecting an element and applying the assertion on it.

Save and run the test cases by right-clicking on the name of the class and choosing Run ‘Test Case name’.

The console will show the progress of the test case. Here it shows passed, which means it found the entire view hierarchy required by the test case.


Implementing Sort By Date Feature In Susper

 

Susper now has a ‘Sort By Date’ feature which provides the user with the latest results first. This feature enhances the search experience and helps users find the desired results more accurately. The date-wise sorting of results is done by the YaCy backend, which uses Apache Solr technology.

The idea was to create a ‘Sort By Date’ feature similar to the market leader’s. For example, if a user searches for the keyword ‘Jaipur’, the results appear like this:

If a user wishes to get the latest results, they can use the ‘Sort By Date’ feature provided under ‘Tools’.

The above screenshot shows the sorted results.

You may, however, notice that the results are not arranged year-wise. The backend work for this is currently going on in YaCy and it will be implemented on the frontend as well once the backend provides this feature.

Under ‘Tools’, we created an option for ‘Sort By Date’ simply using an <li> tag.

<ul class="dropdownmenu">
  <li (click)="filterByDate()">Sort By Date</li>
</ul>

When clicked, it calls filterByDate() function to perform the following task:

filterByDate() {
  let urldata = Object.assign({}, this.searchdata);
  // strip any existing '/date' modifier, then append it to sort results by date
  urldata.query = urldata.query.replace('/date', '') + ' /date';
  this.store.dispatch(new queryactions.QueryServerAction(urldata));
}

Earlier we were using the ‘last_modified desc’ attribute provided by Solr for sorting dates in descending order. In June 2017, this feature was deprecated with a new update of Solr. We now use the /date modifier in the query for sorting results, which is provided by Solr; for example, a query like ‘jaipur /date’ returns the results sorted by date.

 


Implementation of Text-To-Speech Feature In Susper

Susper has been given a voice search feature through which it provides the user a better search experience. To enhance speech recognition, we introduced a Speech Synthesis, or Text-To-Speech, feature. The speech synthesis feature should only work when a voice search is attempted.

The idea was to create speech synthesis similar to the market leader’s. Here is the link to a YouTube video showing a demo of the feature: Video link

The video demonstrates the following:

  • If a manual search is used, the feature should not work.
  • If a voice search is used, the feature should work.

For implementing this feature, we used the Speech Synthesis API, which is provided in Google Chrome version 33 and above.

Checking for speechSynthesis on the window object tells us whether the browser supports this feature, and speaking a short utterance confirms that it works.
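For instance, a minimal support check and test utterance could look like this:

// Feature-detect the Web Speech synthesis API before using it
if ('speechSynthesis' in window) {
  window.speechSynthesis.speak(new SpeechSynthesisUtterance('Hello world!'));
} else {
  console.warn('Speech synthesis is not supported in this browser.');
}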

First, we created an interface:

interface IWindow extends Window {
  SpeechSynthesisUtterance: any;
  speechSynthesis: any;
};

 

Then, under @Injectable, we created the SpeechSynthesisService class.

export class SpeechSynthesisService {
  utterence: any;

  constructor(private zone: NgZone) { }

  speak(text: string): void {
    const { SpeechSynthesisUtterance }: IWindow = <IWindow>window;
    const { speechSynthesis }: IWindow = <IWindow>window;

    this.utterence = new SpeechSynthesisUtterance();
    this.utterence.text = text;    // text to be uttered
    this.utterence.lang = 'en-US'; // default language
    this.utterence.volume = 1;     // can be set between 0 and 1
    this.utterence.rate = 1;       // can be set between 0.1 and 10
    this.utterence.pitch = 1;      // can be set between 0 and 2

    (window as any).speechSynthesis.speak(this.utterence);
  }

  // to pause the queue of utterances
  pause(): void {
    const { speechSynthesis }: IWindow = <IWindow>window;
    const { SpeechSynthesisUtterance }: IWindow = <IWindow>window;
    this.utterence = new SpeechSynthesisUtterance();
    (window as any).speechSynthesis.pause();
  }
}

 

The above code implements the Text-To-Speech feature.

The source code for the implementation can be found here: https://github.com/fossasia/susper.com/blob/master/src/app/speech-synthesis.service.ts

We call speech synthesis only when voice search mode is activated. Here we used Redux to check whether the mode is ‘speech’ or not; when the mode is ‘speech’, the description inside the infobox is uttered.

We did the following changes in infobox.component.ts:

import { SpeechSynthesisService } from '../speech-synthesis.service';

speechMode: any;

constructor(private synthesis: SpeechSynthesisService) { }

this.query$ = store.select(fromRoot.getwholequery);
this.query$.subscribe(query => {
  this.keyword = query.query;
  this.speechMode = query.mode;
});

 

And we added a conditional statement to check whether mode is ‘speech’ or not.

// conditional statement to check if mode is 'speech' or not
if (this.speechMode === 'speech') {
  this.startSpeaking(this.results[0].description);
}

startSpeaking(description) {
  this.synthesis.speak(description);
  this.synthesis.pause();
}

 

 

 


API Error Handling in the Open Event Organizer Android App

Open Event Organizer is an Android app for organizers and entry managers. The Open Event API server acts as the backend for this app. So, basically, the app makes data requests to the API; in return, the API performs the required actions on the data and sends back a response, which is used to display relevant info to the user and to update the app’s local database. The error responses returned by the API need to be parsed so that an understandable error message can be shown to the user.

The app uses Retrofit+OkHttp for making network requests to the API. Hence, the request method returns a Throwable in the case of an error in the action. The Throwable contains a string message which can be obtained using the method named getMessage, but that message is not understandable by a normal user. The Open Event Organizer app uses the ErrorUtils class for this work. The class has a method which takes a Throwable as a parameter and returns a good error message which is easier for the user to understand.

Relevant code:

public final class ErrorUtils {

   public static final int BAD_REQUEST = 400;
   public static final int UNAUTHORIZED = 401;
   public static final int FORBIDDEN = 403;
   public static final int NOT_FOUND = 404;
   public static final int METHOD_NOT_ALLOWED = 405;
   public static final int REQUEST_TIMEOUT = 408;

   private ErrorUtils() {
       // Never Called
   }

   public static String getMessage(Throwable throwable) {
       if (throwable instanceof HttpException) {
           switch (((HttpException) throwable).code()) {
               case BAD_REQUEST:
                    return "Something went wrong! Please check any empty fields in the form.";
               case UNAUTHORIZED:
                   return "Invalid Credentials! Please check your credentials.";
               case FORBIDDEN:
                   return "Sorry, you are not authorized to make this request.";
               case NOT_FOUND:
                   return "Sorry, we couldn't find what you were looking for.";
               case METHOD_NOT_ALLOWED:
                   return "Sorry, this request is not allowed.";
               case REQUEST_TIMEOUT:
                   return "Sorry, request timeout. Please retry after some time.";
               default:
                   return throwable.getMessage();
           }
       }
       return throwable.getMessage();
   }
}

ErrorUtils.java
app/src/main/java/org/fossasia/openevent/app/common/utils/core/ErrorUtils.java

All the error codes are stored as static final fields. It is always a good practice to make the constructor of a utility class private, to ensure the class is never instantiated anywhere in the app. The method getMessage takes a Throwable and checks whether it is an instance of HttpException in order to get an HTTP error code. Actually, there are two exceptions – HttpException and IOException; the former is returned for an error response from the server. In the method, relevant good error messages are returned based on the error codes, and these are shown to the user in a snackbar layout.
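As a usage sketch, an error callback in a presenter could map the Throwable to a message and surface it. The presenter and its View interface below are assumptions for illustration; only ErrorUtils.getMessage comes from the code above.

// Sketch: surfacing an API error to the user
public class EventsPresenter {
    private final View view;

    public EventsPresenter(View view) {
        this.view = view;
    }

    public void onLoadFailed(Throwable throwable) {
        // Convert the raw Throwable into a user-friendly message
        view.showError(ErrorUtils.getMessage(throwable));
    }

    public interface View {
        void showError(String message); // e.g. rendered in a snackbar
    }
}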

It is always a good practice to show more understandable, user-friendly error messages rather than simply the default ones, which are not clear to the normal user.

Links:
1. List of the HTTP Client Error Codes – Wikipedia Link
2. Class Throwable javadoc


Parsing SUSI.AI Blog Feed

Our SUSI.AI web chat is improving every day. Recently we decided to add a blog page to our SUSI.AI web chat to show all the latest blogs related to SUSI.AI. These blogs are written by our developer team. We have the following feed from which we need to parse the blog information and display it:

http://blog.fossasia.org/tag/susi-ai/feed/

This feed is in RSS (Really Simple Syndication) format. We first decided to use the Google Feed API service to parse this RSS content, but that service is now deprecated. We therefore convert the RSS into JSON format and parse the information from that. We have used the rss2json API for converting our RSS content into JSON, and we make an ajax call to this API to fetch the JSON content:

$.ajax({
        url: 'https://api.rss2json.com/v1/api.json',
        method: 'GET',
        dataType: 'json',
        data: {
            'rss_url': 'http://blog.fossasia.org/tag/susi-ai/feed/',
            'api_key': '0000000000000000000000000000000000000000', // put your api key here
            'count': 50
        }
        }).done(function (response) {
            if (response.status !== 'ok') { throw response.message; }
            this.setState({ posts: response.items, postRendered: true });
        }.bind(this));

Explanation:

  • url: This is the base URL of the API to which we make calls in order to fetch the JSON data.
  • rss_url: This is the URL of the RSS feed which needs to be converted into JSON format.
  • api_key: This is the key, which can be generated after making an account on the rss2json website.
  • count: The number of feed items to return; the default is 20.

The converted JSON response can be checked here.

Now we have used the cards of material-ui to show the content of this JSON response. On the success of our ajax call, we update the array (named posts) present in the initial state of our component with response.items (an array containing the information of the blogs).

We map through each element in the posts array and return a corresponding card containing the relevant information. We parsed the following information from the JSON:

  • Name of the author: posts.author
  • Title of the blog: posts.title
  • Link to blog on WordPress: posts.link

Publish date of the blog:
The publish date is in the format “2017-08-05 09:05:27” (for example), and we need to format this date. We used the following code to do that:

import dateFormat from 'dateformat'; // npm package

let date = posts.pubDate.split(' ');
let d = new Date(date[0]);
dateFormat(d, 'dddd, mmmm dS, yyyy');

This converts “2017-08-05 09:05:27” to “Saturday, August 5th, 2017”.

Description and Featured Images of the blog:
We needed to show a short description of the blog and a featured image. Each element in our posts array has a description field containing HTML, starting with the featured image followed by a short description. We needed to convert this HTML into plain text, for which I used the html-to-text npm package:

let description = htmlToText.fromString(posts.description).split('…');

The description variable now contains plain text. A simple example:

[https://blog.fossasia.org/wp-content/uploads/2017/08/image2-768×442.png] Our SUSI.AI Web Chat has many static pages like Overview, Devices, Team and Support.We have separate CSS files for each component. Recently, we faced a problem regarding design pattern where CSS files of one component were affecting another component. This blog is all about solving this issue and we take an example of distortion

Now, this text contains both the link for the featured image and our short description text. For getting the link of the featured image, I used a regex (regular expression) in the following code and saved the link in the variable image; for the short description I used the split function:

let text = description[0].split(']');
let image = susi; // temporary image used for initialisation
let regExp = /\[(.*?)\]/;
let imageUrl = regExp.exec(description[0]);
if (imageUrl) {
   image = imageUrl[1];
}

After successfully parsing this information from the JSON, we have cards with the details, and a card looks like this:

As we need all of this to be rendered when the component mounts, we put our ajax call inside the componentDidMount() function of our React component.
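Putting the pieces together, the mapping from the posts array to cards could look roughly like the following sketch. The actual card markup in the web chat is richer, and the Card imports assume the material-ui v0.x API:

import React from 'react';
import { Card, CardHeader, CardText } from 'material-ui/Card';

// Sketch: render one material-ui card per parsed feed item
const BlogList = ({ posts }) => (
  <div>
    {posts.map((post, i) => (
      <Card key={i}>
        <CardHeader title={post.title} subtitle={post.author} />
        <CardText>
          <a href={post.link}>Read on WordPress</a>
        </CardText>
      </Card>
    ))}
  </div>
);

export default BlogList;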


Using Mosquitto as a Message Broker for MQTT in loklak Server

In loklak server, messages are collected from various sources and indexed using Elasticsearch. To know when a message of interest arrives, users can poll the search endpoint, but this method requires a lot of HTTP requests, most of them redundant. Also, if a user would like to collect messages for a particular topic, they would need to make many requests over a period of time to get enough data.

For GSoC 2017, my proposal was to introduce a stream API in the loklak server so that we could save ourselves from making too many requests and also enable many new use cases.

Mosquitto is an Eclipse project which acts as a message broker for the popular MQTT protocol. MQTT, based on the pub-sub model, is a lightweight and IoT-friendly protocol. In this blog post, I will discuss the basic setup of Mosquitto in the loklak server.

Installation and Dependency for Mosquitto

The installation process of Mosquitto is very simple. For Ubuntu, it is available from the pre-installed PPAs –

sudo apt-get install mosquitto
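Once installed, the broker can be smoke-tested from two terminals with the stock command line clients; the topic name here is just an example:

# Terminal 1: subscribe to a test topic on the local broker
mosquitto_sub -h 127.0.0.1 -p 1883 -t "loklak/test" -v

# Terminal 2: publish a message to the same topic
mosquitto_pub -h 127.0.0.1 -p 1883 -t "loklak/test" -m "hello from loklak"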

Once the message broker is up and running, we can use clients to connect to it and publish/subscribe to channels. To add an MQTT client as a project dependency, we can introduce the following line in the Gradle dependencies file –

compile group: 'net.sf.xenqtt', name: 'xenqtt', version: '0.9.5'

[SOURCE]

After this, we can use the client libraries in the server code base.

The MQTTPublisher Class

The MQTTPublisher class in loklak provides an interface to perform basic operations in MQTT. The implementation uses an AsyncClientListener to connect to the Mosquitto broker –

AsyncClientListener listener = new AsyncClientListener() {
    // Override methods according to needs
};

[SOURCE]

The publish method for the class can be used by other components of the project to publish messages on the desired channel –

public void publish(String channel, String message) {
    this.mqttClient.publish(new PublishMessage(channel, QoS.AT_LEAST_ONCE, message));
}

[SOURCE]

We also have methods which allow publishing of multiple messages to multiple channels in order to increase the functionality of the class.

Starting Publisher with Server

The flags which signal the use of the streaming service in loklak are located in conf/config.properties. These configurations are read while initializing the Data Access Object, and an MQTTPublisher is created if needed –

String mqttAddress = getConfig("stream.mqtt.address", "tcp://127.0.0.1:1883");
streamEnabled = getConfig("stream.enabled", false);
if (streamEnabled) {
    mqttPublisher = new MQTTPublisher(mqttAddress);
}

[SOURCE]

The mqttPublisher can now be used by other components of loklak to publish messages to the channel they want.
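For example, a component that has just indexed a message could push it to a channel like this; the sketch assumes the publisher is exposed as a static field on the DAO, and the channel name is hypothetical:

// Hypothetical usage from another loklak component once streaming is enabled
public void onNewMessage(String messageJson) {
    if (DAO.mqttPublisher != null) {
        DAO.mqttPublisher.publish("loklak/messages", messageJson);
    }
}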

Adding Mosquitto to Kubernetes

Since loklak also has a nice Kubernetes setup, it was very simple to introduce a new deployment for Mosquitto.

Changes in Dockerfile

The Dockerfile for the master deployment has to be modified to discover the Mosquitto broker in the Kubernetes cluster. For this purpose, the corresponding flags in config.properties have to be changed to ensure that things work fine –

sed -i.bak 's/^\(stream.enabled\).*/\1=true/' conf/config.properties && \
sed -i.bak 's/^\(stream.mqtt.address\).*/\1=mosquitto.mqtt:1883/' conf/config.properties && \

[SOURCE]

The Mosquitto broker will be available at mosquitto.mqtt:1883 because of the service that is created for it (explained in a later section).

Mosquitto Deployment

The Docker image used in the Kubernetes deployment of Mosquitto is taken from toke/docker-mosquitto. Two ports are exposed to the cluster, but no volumes are needed –

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mosquitto
  namespace: mqtt
spec:
  ...
  template:
    ...
    spec:
      containers:
      - name: mosquitto
        image: toke/mosquitto
        ports:
        - containerPort: 9001
        - containerPort: 8883

[SOURCE]

Exposing Mosquitto to the Cluster

Now that we have the deployment running, we need to expose the required ports to the cluster so that other components may use them. Port 9001 appears as port 80 for the service, and 1883 is also exposed –

apiVersion: v1
kind: Service
metadata:
  name: mosquitto
  namespace: mqtt
  ...
spec:
  ...
  ports:
  - name: mosquitto
    port: 1883
  - name: mosquitto-web
    port: 80
    targetPort: 9001

[SOURCE]

After creating the service using this configuration, we will be able to connect our clients to Mosquitto at address mosquitto.mqtt:1883.

Conclusion

In this blog post, I discussed the process of adding Mosquitto to the loklak server project. This is the first step towards introducing the stream API for messages collected in loklak.

These changes were introduced in pull requests loklak/loklak_server#1393 and loklak/loklak_server#1398 by @singhpratyush (me).


Implementing Speakers Call API in Open Event Frontend

This article illustrates how to display the speakers call details on the call-for-speakers page in the Open Event Frontend project using the Open Event Orga API. The API endpoint mainly used for fetching the speakers call details is:

GET /v1/speakers-calls/{speakers_call_id}

In the case of Open Event, speakers are asked to submit their proposals beforehand if they are interested in giving a talk. For this purpose, we have a section called Call for Speakers on the event’s public page, where the details about the speakers call are present along with a Submit Proposal button which redirects to the link where speakers can upload their proposal if the call is open. Since the call-for-speakers page is part of the event’s public page, the routes concerned are the public/index route and its subroute public/index/cfs in the application. As the call-for-speakers details are nested within the event model, we need to first fetch the event and then fetch the speakers-call details from that model.

The code to fetch the event model looks like this:

model(params) {
  return this.store.findRecord('event', params.event_id, { include: 'social-links' });
}

The above model takes care of fetching all the data related to the event, but we can see that the speakers call is not included as a parameter. The main reason behind this is that the speakers call is not required on every public route; it is required only for the subroute public/index/cfs. Let’s see how the code for the speakers-call model works to fetch the speakers call details from the above event model.

model() {
    const eventDetails = this.modelFor('public');
    return RSVP.hash({
      event        : eventDetails,
      speakersCall : eventDetails.get('speakersCall')
    });
}

In the above code, we made use of this.modelFor('public') to reuse the event data fetched in the model of the public route, eliminating a separate API call for the event details in the speakers call route. Next, using Ember’s get method, we fetch the speakers call data from eventDetails and place it inside the speakersCall property of the returned hash, to be used later for displaying the speakers call details in the public/index subroute.

Until now, we have fetched the event and speakers call details in the call-for-speakers subroute, but we need to display them on the index page of the subroute. So we pass the model from the file cfs.hbs to call-for-speakers.hbs, the code for which looks like this:

{{public/call-for-speakers speakersCall=model.speakersCall}}  

The trickiest part in implementing the speakers call is checking whether the call is open or closed. The code which checks this is:

isOpen: computed('startsAt', 'endsAt', function() {
     return moment().isAfter(this.get('startsAt')) && moment().isBefore(this.get('endsAt'));
})

In the above computed property isOpen of the speakers-call model, we depend on the starting and ending times of the speakers call. We then check that the current time is after the starting time and before the ending time; if both conditions are true, the speakers call is open, otherwise it is closed.

Now we need a template file which defines the user interface for call-for-speakers based on the above isOpen property. The code for displaying the UI based on the open or closed status is:

  {{#if speakersCall.isOpen}}
    <a class="ui basic green label">{{t 'Open'}} </a>
    <div class="sub header">
      {{t 'Call for Speakers Open until'}} {{moment-format speakersCall.endsAt 'ddd, MMM DD HH:mm A'}}
    </div>
  {{else}}
    <a class="ui basic red label">{{t 'Closed'}}</a>
  {{/if}}

In the above code, if speakersCall.isOpen is true, we show an ‘Open’ label and display the date until which the speakers call is open, using the moment helper with the format “ddd, MMM DD HH:mm A”; otherwise we show a ‘Closed’ label. The UI for the above code looks like this:

Fig. 1: The heading of speakers call page when the call for speakers is open

The complete UI of the page looks like this.

Fig. 2: The user interface for the speakers call page

The entire code for implementing the speakers call API can be seen here.

To conclude, this is how we efficiently fetched the speakers call details using the Open Event Orga speakers call API, ensuring that there is no unnecessary API call to fetch the data.


Implementing Rich Responses in SUSI Action on Google Assistant

Google Assistant is a personal assistant. An earlier tutorial showed how to make a SUSI action on Google that could respond only to text messages. However, we also need rich responses, such as table and RSS responses from the SUSI API. We can implement the following types of rich responses:

  • List
  • Carousel
  • Suggestion Chip


List:
A list gives the user multiple items arranged vertically. In SUSI web chat, table-type responses are handled with a list, and it looks like this:

Now, in order to show a list from the SUSI action on Google, we will use the first key (i.e. Recipe Name in the above screenshot) as the title for each item in the list and the rest of the keys as the description for that item. For building it we will use the following code:

assistant.askWithList(assistant.buildRichResponse()
   .addSimpleResponse('Any text you want to send before the list'),
   // Build a list
   assistant.buildList('Title of the list here')
   // First item in list
   .addItems(list[1])
   // Second item in list
   .addItems(list[2])
   // Third item in list
   .addItems(list[3])
);

In .addItems we add the objects after populating them from the data we get from the SUSI API, which we do with the following code:

for (var i = 0; i < metadata.count; i++) {
   list[i] = assistant.buildOptionItem('ID for item', ['keyword','keyword','keyword'])
       .setTitle('title of your item in the list here')
       .setDescription('description of item in list')
}

A list should have at least two items. The list title and the item descriptions are optional, while the title of each item in the list is compulsory. We can also set an image for list items in the following way:

.setImage('link for the image here', 'Image alternate text')

Setting an image in the list is optional, and the image size will be 48×48 pixels.

Carousel:
Carousel messages are cards that we can scroll horizontally. In SUSI web chat, RSS-type responses are handled with a carousel, and it looks like this:

In order to show a carousel from the SUSI action on Google, we will use the following code:

assistant.askWithCarousel(assistant.buildRichResponse()
   .addSimpleResponse('Any text you want to send before the carousel'),
   // Build a carousel
   assistant.buildCarousel()
   // First item in carousel
   .addItems(list[1])
   // Second item in carousel
   .addItems(list[2])
   // Third item in carousel
   .addItems(list[3])
);

Again, we populate .addItems() just like we did for the list. A carousel should have at least two tiles. Just like in the list, the title is compulsory while the description and image are optional. The image size in the carousel is 128dp × 232dp. You can set the image the same way as in the list above.

Suggestion Chip:
A suggestion chip is used to add a hint to a response, i.e. if you want to give the user a set of suggested replies, we can use suggestion chips. To add suggestion chips to any response, you just have to add .addSuggestions(['Hint1', 'Hint2', 'Hint3', 'Hint4']) to your response code after .addSimpleResponse().
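For instance, a plain response with a few suggested replies might be built like this (the hint strings are just examples):

assistant.ask(assistant.buildRichResponse()
    .addSimpleResponse('Here is what I found.')
    // Suggestion chips shown below the response
    .addSuggestions(['Next', 'Previous', 'Stop']));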

See how the SUSI action on Google replies to table and RSS type responses in this video: https://youtu.be/rjw-Vg4X57g. If you want to learn more about Actions on Google, refer to https://developers.google.com/actions/components/.

Resources:
Actions on Google: https://developers.google.com/actions/apiai/


Checking Whether Migrations Are Up To Date With The Sqlalchemy Models In The Open Event Server

In the Open Event Server, when a pull request changes an SQLAlchemy model, the corresponding migrations are sometimes missed in the PR.

The first approach to check whether the migrations were up to date in the database used the following health check function:

from subprocess import check_output
def health_check_migrations():
   """
   Checks whether database is up to date with migrations, assumes there is a single migration head
   :return:
   """
   head = check_output(["python", "manage.py", "db", "heads"]).split(" ")[0]
   
   if head == version_num:
       return True, 'database up to date with migrations'
   return False, 'database out of date with migrations'

 

In the above function, we get the head according to the migration files as follows:

head = check_output(["python", "manage.py", "db", "heads"]).split(" ")[0]


The table alembic_version contains the latest alembic revision to which the database was actually upgraded. We can get this revision from the following line:

version_num = (db.session.execute('SELECT version_num from alembic_version').fetchone())['version_num']

 

Then we compare both of the heads and return a proper tuple based on the comparison output. While this method was pretty fast, there was a drawback in this approach: if the user forgets to generate the migration files for the changes done in the SQLAlchemy model, this approach will fail to raise a failure status in the health check.

To overcome this drawback, all the SQLAlchemy models are fetched automatically and simple SQLAlchemy SELECT queries are made to check whether the migrations are up to date.

Remember that a raw SQL query will not serve our purpose in this case, as you’d have to specify the columns explicitly in the query. But an SQLAlchemy query generates the SQL based on the fields defined in the db model, so if migrations are missing to incorporate a model change, a proper error will be raised.
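To see the difference, suppose a new column name was added to a hypothetical User model but the migration was never generated; the raw query keeps working while the model query fails loudly:

# Hypothetical User model: the 'name' column exists in the model,
# but was never added to the database by a migration.

# Raw SQL: selects whatever columns actually exist, so it succeeds
db.session.execute('SELECT * FROM users').fetchone()

# SQLAlchemy query: generates "SELECT users.id, users.name, ..." from the
# model definition, so the missing column raises a ProgrammingError
db.session.query(User).first()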

We can accomplish this from the following function:

def health_check_migrations():
   """
   Checks whether database is up to date with migrations by performing a select query on each model
   :return:
   """
   # Get all the models in the db; all models should have an explicit __tablename__
   classes, models, table_names = [], [], []
   # noinspection PyProtectedMember
   for class_ in db.Model._decl_class_registry.values():
       try:
           table_names.append(class_.__tablename__)
           classes.append(class_)
       except:
           pass
   for table in db.metadata.tables.items():
       if table[0] in table_names:
           models.append(classes[table_names.index(table[0])])

   for model in models:
       try:
           db.session.query(model).first()
       except:
           return False, '{} model out of date with migrations'.format(model)
   return True, 'database up to date with migrations'

 

In the above code, we automatically get all the models and tables present in the database. Then, for each model, we try a simple SELECT query which returns the first row found. If there is any error in doing so, False, '{} model out of date with migrations'.format(model) is returned, ensuring a failure status in the health check.
