Adding Fallback Images in SUSI.AI Skill CMS

SUSI.AI Skill CMS shows an image for every skill. Here we are going to talk about a special case: how we handle the situation when the image is not found. We will be discussing the author's skill component (which lists all the skills by an author) and how we added a fallback image to handle all the cases. For displaying an image in the table that lists all skills of an author, we provide the path of the image in the SUSI Skill Data repository. The path is built as follows:

let image = 'https://raw.githubusercontent.com/fossasia/susi_skill_data/master/models/general/' + parse[6] + '/' + parse[7] + '/images/' + parse[8].split('.')[0];

Explanation:
parse is the array that contains the model, the language ISO code, and the name of the skill. It is obtained after parsing the JSON from this endpoint:

"http://api.susi.ai/cms/getSkillsByAuthor.json?author=" + author;
  • parse[6]: This represents the model of the skill. There are currently six models: Assistants, Entertainment, Knowledge, Problem Solving, Shopping and Small Talks.
  • parse[7]: This represents the ISO language code of the skill.
  • parse[8]: This represents the name of the skill file.
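
For example, one of the skill file paths returned by this endpoint looks like /home/susi/susi_skill_data/models/general/Entertainment/en/flip_coin.txt. Splitting it on '/' shows where the indices above come from (a small illustrative sketch):

let skillPath = '/home/susi/susi_skill_data/models/general/Entertainment/en/flip_coin.txt';
let parse = skillPath.split('/');
// parse[6] === 'Entertainment'  -> model of the skill
// parse[7] === 'en'             -> ISO language code
// parse[8] === 'flip_coin.txt'  -> skill file name
// and the constructed image path (without extension) becomes:
// 'https://raw.githubusercontent.com/fossasia/susi_skill_data/master/models/general/Entertainment/en/images/flip_coin'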

Now the image variable just needs the file extension. Images in our skill data repository have either .jpg or .png extensions, so we create two image URLs:

let image1 = image + '.png';
let image2 = image + '.jpg';

The img tag takes only a single URL in its src attribute, and the alt attribute can only hold a string. We needed to check which image actually exists and set the proper src. This can be solved by the following methods:

We could use jQuery to solve this:

$.get(image_url)
    .done(function() {
        // image exists
    })
    .fail(function() {
        // image doesn't exist
    });

This would result in more code, and it also does not handle the case where no image is found at all and we need to show the CircleImage component, which takes the first two letters of the skill name and renders a circular placeholder. After researching on the internet we found a perfect solution to our problem: an npm package named react-image, which is an alternative to the default img tag. The features of the react-image package that help us are:

  • We can provide multiple fallback images in an array as the source; they will be tried in the order of their index in the array. This solves our extension problem: we provide the image URL with every possible extension.
  • We can show a fallback element in case none of the images load. This solves our second problem, where we need to show the CircleImage component.

The code looks like this:

<Img
  style={imageStyle}
  src={[
       image1,
       image2
      ]}
  unloader={<CircleImage name={name} size="40"/>}
 />


Using Firebase Test Lab for Testing test cases of Phimpme Android

We have now started writing test cases for Phimpme Android. While running my instrumentation test cases, I noticed a Cloud Testing tab in Android Studio. This is for Firebase Test Lab. Firebase Test Lab provides cloud-based infrastructure for testing Android apps. Not everyone has devices running every Android version, but testing on all of them is equally important.

How I used test lab in Phimpme

  • Run your first test on Firebase

Select Test Lab in your project in the left nav of the Firebase console, and then click Run a Robo test. The Robo test automatically explores your app on a wide array of devices to find defects and reports any crashes that occur. It doesn't require you to write test cases; all you need is the app's APK.

Upload your application's APK (app-debug-unaligned.apk) on the next screen and click Continue.

Configure the device selection; a wide range of devices and all API levels are available. You can save the template for future use.

Click Start test to begin testing. It will run the tests and show real-time progress as well.

  • Using Firebase Test Lab from Android Studio

It requires Android Studio 2.0+. You need to edit the configuration of the Android Instrumentation test.

Select Firebase Test Lab Device Matrix under Target. You can configure the matrix, which defines the virtual and physical devices on which your tests will run. See the screenshot below for details.

Note: You need to enable Firebase in your project.

So, using Test Lab on Firebase, we can easily run our test cases on multiple devices and make our app more robust.


Generating Map Action Responses in SUSI AI

SUSI AI responds to location related user queries with a Map action response. The different types of responses are referred to as actions which tell the client how to render the answer. One such action type is the Map action type. The map action contains latitude, longitude and zoom values telling the client to correspondingly render a map with the given location.

Let us visit SUSI Web Chat and try it out.

Query: Where is London

Response: (API Response)

The API response actions contain text describing the specified location, an anchor with the text 'Here is a map' linked to OpenStreetMap, and a map with the location coordinates.

Let us look at how this is implemented on the server.

For location related queries, the key where is used as an identifier. Once the query is matched with this key, a regular expression `where is (?:(?:a )*)(.*)` is used to parse the location name.

"keys"   : ["where"],
"phrases": [
  {"type":"regex", "expression":"where is (?:(?:a )*)(.*)"},
]

The parsed location name is stored in $1$ and is used to make API calls to fetch information about the place and its location. A console process is used to fetch the required data from an API.

"process": [
  {
    "type":"console",
    "expression":"SELECT location[0] AS lon, location[1] AS lat FROM locations WHERE query='$1$';"},
  {
    "type":"console",
    "expression":"SELECT object AS locationInfo FROM location-info WHERE query='$1$';"}
],

Here, we need to make two API calls:

  • For getting information about the place
  • For getting the location coordinates

First, let us look at how a console process works. In a console process we provide the URL to fetch data from, the query parameter to be passed to that URL, and the path at which to look for the answer in the API response.

  • url = <url>   – the url to the remote json service which will be used to retrieve information. It must contain a $query$ string.
  • test = <parameter> – the parameter that will replace the $query$ string inside the given url. It is required to test the service.

For getting the information about the place, we used the Wikipedia API. We named this console process location-info and added the required attributes to run it and fetch data from the API.

"location-info": {
  "example":"http://127.0.0.1:4000/susi/console.json?q=%22SELECT%20*%20FROM%20location-info%20WHERE%20query=%27london%27;%22",
  "url":"https://en.wikipedia.org/w/api.php?action=opensearch&limit=1&format=json&search=",
  "test":"london",
  "parser":"json",
  "path":"$.[2]",
  "license":"Copyright by Wikipedia, https://wikimediafoundation.org/wiki/Terms_of_Use/en"
}

The attributes used are:

  • url : The MediaWiki API endpoint
  • test : The location name which will be appended to the url before making the API call
  • parser : Specifies the response type used for parsing the answer
  • path : Points to the location in the response where the required answer is present

The API endpoint called is of the following format:

https://en.wikipedia.org/w/api.php?action=opensearch&limit=1&format=json&search=LOCATION_NAME

For the query `where is london`, the API call returns:

[
  "london",
  ["London"],
  ["London  is the capital and most populous city of England and the United Kingdom."],
  ["https://en.wikipedia.org/wiki/London"]
]

The path $.[2] points to the third element of the array, i.e. “London is the capital and most populous city of England and the United Kingdom.”, which is stored in $locationInfo$.

Similarly, to get the location coordinates, another API call is made to the loklak API.

"locations": {
  "example":"http://127.0.0.1:4000/susi/console.json?q=%22SELECT%20*%20FROM%20locations%20WHERE%20query=%27rome%27;%22",
  "url":"http://api.loklak.org/api/console.json?q=SELECT%20*%20FROM%20locations%20WHERE%20location='$query$';",
  "test":"rome",
  "parser":"json",
  "path":"$.data",
  "license":"Copyright by GeoNames"
},

The location coordinates are found in $.data.location in the API response and are stored as latitude and longitude in $lat$ and $lon$ respectively.

Finally, we have the description of the location and its coordinates, so we create the actions to be put in the server response.

The first action is of type answer and the text to be displayed is given by $locationInfo$, where the data from the Wikipedia API response is stored.

{
  "type":"answer",
  "select":"random",
  "phrases":["$locationInfo$"]
},

The second action is of type anchor. The text to be displayed is `Here is a map` and it must be hyperlinked to OpenStreetMap with the obtained $lat$ and $lon$.

{
  "type":"anchor",
  "link":"https://www.openstreetmap.org/#map=13/$lat$/$lon$",
  "text":"Here is a map"
},

The last action is of type map which is populated for latitude and longitude using $lat$ and $lon$ respectively and the zoom value is specified to be 13.

{
  "type":"map",
  "latitude":"$lat$",
  "longitude":"$lon$",
  "zoom":"13"
}

The final output from the server will now contain the three actions with the required data obtained from the respective API calls. For the sample query `where is london`, the actions will look like:

"actions": [
  {
    "type": "answer",
    "language": "en",
    "expression": "London  is the capital and most populous city of England and the United Kingdom."
  },
  {
    "type": "anchor",
    "link":   "https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228",
    "text": "Here is a map",
    "language": "en"
  },
  {
    "type": "map",
    "latitude": "51.51279067225417",
    "longitude": "-0.09184009399817228",
    "zoom": "13",
    "language": "en"
  }
],

This is how map action responses are generated for location related queries. The complete code can be found in the SUSI AI Server repository.
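
On the client side, a chat UI can iterate over this actions array and render each action type separately. Below is a minimal, hypothetical sketch of such handling; the descriptor objects and the function name are not part of SUSI Web Chat, just an illustration:

// Minimal sketch: convert a SUSI "actions" array into simple render descriptors.
// The descriptor shape below is only illustrative, not SUSI Web Chat code.
function describeActions(actions) {
  return actions.map(action => {
    switch (action.type) {
      case 'answer':
        return { kind: 'text', text: action.expression };
      case 'anchor':
        return { kind: 'link', href: action.link, label: action.text };
      case 'map':
        return {
          kind: 'map',
          lat: parseFloat(action.latitude),
          lon: parseFloat(action.longitude),
          zoom: parseInt(action.zoom, 10),
        };
      default:
        return { kind: 'unknown' };
    }
  });
}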


Getting skills by an author in SUSI.AI Skill CMS

The skill description page of any skill in SUSI.AI Skill CMS displays all the details regarding the skill: its image, description, examples and the name of the author. A skill made by an author may impress users, and they might want to know about more skills made by that particular author.

We decided to display all the skills by an author. We needed an endpoint from the server to get the skills by author; this cannot be done on the client side, as that would result in multiple AJAX calls to the server, one for each skill of the user. The endpoint used is:

"http://api.susi.ai/cms/getSkillsByAuthor.json?author=" + author

Here author is the name of the author who published the particular skill. We make an AJAX call to the server with the endpoint mentioned above when the user clicks on the author. The response looks like this (example):

{
  0: "/home/susi/susi_skill_data/models/general/Entertainment/en/creator_info.txt",
  1: "/home/susi/susi_skill_data/models/general/Entertainment/en/flip_coin.txt",
  2: "/home/susi/susi_skill_data/models/general/Assistants/en/websearch.txt",
  session: {
    identity: {
      type: "host",
      name: "139.5.254.154",
      anonymous: true
    }
  }
}

The response contains the list of skills made by the author. We parse this response to get the required information. We decided to display a table containing the name, category and language of each skill. We used the map function on the object keys to parse information from every key present in the JSON response. Every value corresponding to a key is a path of the following type:

"/home/susi/susi_skill_data/models/general/Category/language/name.txt"

Explanation:

  • Category: There are currently six categories: Assistants, Entertainment, Knowledge, Problem Solving, Shopping and Small Talks. Each skill falls under one of these categories.
  • language: This represents the ISO language code of the language in which the skill is written.
  • name: This is the name of the skill.

We want these attributes from the string so we have used the split function:

let parse = data[skill].split('/');

data is the JSON response and skill is the key whose value we are parsing. We store the array returned by the split function in the variable parse. Now we return the following table row in the map function:

return (
            <TableRow>
               <TableRowColumn>
                   <div>
                      <Img
                         style={imageStyle}
                         src={[
                              image1,
                              image2
                         ]}
                         unloader={<CircleImage name={name} size="40"/>}
                       />
                       {name}
                    </div>
                </TableRowColumn>
                <TableRowColumn>{parse[6]}</TableRowColumn>
                <TableRowColumn>{isoConv(parse[7])}</TableRowColumn>
             </TableRow>
          )

Here:

    • name: The name of the skill, converted into title case by the following code:
let name = parse[8].split('.')[0];
name = name.charAt(0).toUpperCase() + name.slice(1);
  • parse[6]: This represents the category of the skill.
  • isoConv(parse[7]): parse[7] is the ISO language code of the skill, and isoConv is an npm package used to get the full name of the language from its ISO code.
  • CircleImage: This is a fallback option in case the image at the URL is not found. It takes the first two letters of the skill name and renders a circular component.

After successful execution of the code, we have a table that looks like this:


Skill Editor in SUSI Skill CMS

SUSI Skill CMS is a web application built with ReactJS for creating and editing SUSI skills easily. It follows an API-centric approach where the SUSI Server acts as the API server. In this blog post we will see how to add a component that can be used to create a new skill in SUSI Skill CMS.

For creating any skill in SUSI we need four parameters: model, group, language and skill name. So we need to ask the user for these four parameters. For input purposes we have a common card component which has dropdowns for selecting the model, group and language, and a text field for the skill name.

<SelectField
    floatingLabelText="Model"
    value={this.state.modelValue}
    onChange={this.handleModelChange}
>
    {models}
</SelectField>
<SelectField
    floatingLabelText="Group"
    value={this.state.groupValue}
    onChange={this.handleGroupChange}
>
    {groups}
</SelectField>
<SelectField
    floatingLabelText="Language"
    value={this.state.languageValue}
    onChange={this.handleLanguageChange}
>
    {languages}
</SelectField>
<TextField
    floatingLabelText="Enter Skill name"
    floatingLabelFixed={true}
    hintText="My SUSI Skill"
    onChange={this.handleExpertChange}
/>
<RaisedButton label="Save" backgroundColor="#4285f4" labelColor="#fff" style={{marginLeft:10}} onTouchTap={this.saveClick} />

This is the card component where we get the user input. We have API endpoints on SUSI Server for getting the list of models, groups and languages. Using those APIs we inflate the dropdowns.
Then the user needs to edit the skill. For editing skills we have used Ace Editor. Ace is a code editor written in pure JavaScript which matches the features of native editors like Sublime and TextMate.

To use Ace we need to install the component.

npm install react-ace --save                        

This command will install the dependency and update the package.json file in our project with this dependency.

To use this editor we need to import AceEditor and place it in the render function of our react class.
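
The import block could look roughly like the following; the exact brace mode and theme imports depend on what you configure, so treat the ones below as placeholders rather than the CMS code:

import React from 'react';
import brace from 'brace';
import AceEditor from 'react-ace';

// react-ace (backed by brace) needs the mode and theme to be imported explicitly;
// the mode and theme below are only examples.
import 'brace/mode/java';
import 'brace/theme/github';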

<AceEditor
    mode="markup"
    theme={this.state.editorTheme}
    width="100%"
    fontSize={this.state.fontSizeCode}
    height="400px"
    value={this.state.code}
    name="skill_code_editor"
    onChange={this.onChange}
    editorProps={{$blockScrolling: true}}
/>

Now we have a page that looks something like this

Now we need to handle the click event when a user clicks on the save button.

First we check if the user is logged in or not. For this we check if we have the required cookies and the access token of the user.

if(!cookies.get('loggedIn')) {
    notification.open({
        message: 'Not logged In',
        description: 'Please login and then try to create/edit a skill',
        icon: <Icon type="close-circle" style={{ color: '#f44336' }} />,
    });
    return 0;
}

If the user is not logged in, we show an error notification and ask them to log in.

Then we check whether all the required fields, such as the name of the skill, have been filled in, and after that we call an API endpoint on the SUSI Server that will finally store the skill in the skill data repository.

let url = 'http://api.susi.ai/cms/modifySkill.json';
$.ajax({
    url: url,
    dataType: 'jsonp',
    jsonp: 'callback',
    crossDomain: true,
    success: function (data) {
        console.log(data);
        if(data.accepted === true){
            notification.open({
                message: 'Accepted',
                description: 'Your Skill has been uploaded to the server',
                icon: <Icon type="check-circle" style={{ color: '#00C853' }} />,
            });
        }
    }
});

In the success callback of the AJAX call we check whether the accepted parameter returned by the server is true. If it is, we show the user a notification with the message “Your Skill has been uploaded to the server”.

To see this component running please visit http://skills.susi.ai/skillEditor.

Resources

Material-UI: http://www.material-ui.com/

Ace Editor: https://github.com/securingsincity/react-ace

Ajax: http://api.jquery.com/jquery.ajax/

Universal Cookies: https://www.npmjs.com/package/universal-cookie


Uploading Images to SUSI Server

SUSI Skill CMS is a web app to create and modify SUSI skills. It needs API endpoints to function, and the SUSI Server provides them. In this blog post, we will see how to add a servlet to SUSI Server to upload images and files.

The CreateSkillService.java file is the servlet which handles the process of creating new Skills. It requires different user roles to be implemented and hence it extends the AbstractAPIHandler.

Image upload is only possible via a POST request, so we will first override the doPost method in this servlet.

@Override
protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
    resp.setHeader("Access-Control-Allow-Origin", "*"); // enable CORS

resp.setHeader enables CORS for the servlet. This is required, as POST requests must have CORS enabled from the server. This is an important security feature provided by the browser.

        Part file = req.getPart("image");
        if (file == null) {
            json.put("accepted", false);
            json.put("message", "Image not given");
        }

Image uploads to servers are usually multipart requests, so we get the part named “image” from the form data.

When we receive the image file, we check whether an image with the same name already exists on the server.

Path p = Paths.get(language + File.separator + "images/" + image_name);

if (image_name == null || Files.exists(p)) {
    json.put("accepted", false);
    json.put("message", "The Image name not given or Image with same name is already present ");
}

If the same file is already present on the server, we return an error asking the user to upload the file with a unique name.

Image image = ImageIO.read(filecontent);
BufferedImage bi = this.createResizedCopy(image, 512, 512, true);
if (!Files.exists(Paths.get(language.getPath() + File.separator + "images"))) {
    new File(language.getPath() + File.separator + "images").mkdirs();
}
ImageIO.write(bi, "jpg", new File(language.getPath() + File.separator + "images/" + image_name));

Then we read the content of the image into an Image object and check whether the images directory exists. If there is no images directory in the specified skill path, we create a folder named “images”.

We usually prefer square images at the Skill CMS. So we create a resized copy of the image of 512×512 dimensions and save that copy to the directory we created above.

BufferedImage createResizedCopy(Image originalImage, int scaledWidth, int scaledHeight, boolean preserveAlpha) {
    // use ARGB when the alpha channel must be preserved, plain RGB otherwise
    int imageType = preserveAlpha ? BufferedImage.TYPE_INT_ARGB : BufferedImage.TYPE_INT_RGB;
    BufferedImage scaledBI = new BufferedImage(scaledWidth, scaledHeight, imageType);
    Graphics2D g = scaledBI.createGraphics();
    if (preserveAlpha) {
        g.setComposite(AlphaComposite.Src);
    }
    g.drawImage(originalImage, 0, 0, scaledWidth, scaledHeight, null);
    g.dispose();
    return scaledBI;
}

The function above creates a resized copy of the image with the specified dimensions. If the image is a PNG, it also preserves the transparency of the image while creating the copy.

Since the SUSI server follows an API centric approach, all servlets respond in JSON.

       resp.setContentType("application/json");
       resp.setCharacterEncoding("UTF-8");
       resp.getWriter().write(json.toString());’

Finally, we set the content type and character encoding of the response. This helps clients parse the data easily.

To see this endpoint live, send a POST request to http://api.susi.ai/cms/createSkill.json.
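
For reference, a client could send such a multipart request with a sketch like the one below. Only the “image” part name is taken from the servlet above; the other field name is an assumption for illustration:

// Sketch of a multipart image upload from the browser.
// 'image' matches req.getPart("image") in the servlet; 'image_name' is an assumed field.
function uploadSkillImage(imageFile) {
  const form = new FormData();
  form.append('image', imageFile);
  form.append('image_name', imageFile.name);

  return fetch('http://api.susi.ai/cms/createSkill.json', {
    method: 'POST',
    body: form, // the browser sets the multipart/form-data boundary automatically
  }).then(response => response.json());
}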

Resources

Apache Docs: https://commons.apache.org/proper/commons-fileupload/using.html

Multipart POST Request Tutorial: http://www.codejava.net/java-se/networking/upload-files-by-sending-multipart-request-programmatically

Java File Upload tutorial: https://ursaj.com/upload-files-in-java-with-servlet-api

Jetty Project: https://github.com/jetty-project/


Storing SUSI Settings for Anonymous Users

SUSI Web Chat is equipped with a number of settings by which it can be configured. A user who is logged in can change their settings and save them by sending them to the server. For an anonymous user, however, the settings would be lost every time the chat is opened, unless they are stored somewhere. To overcome this problem we store the settings in the browser's cookies, which we do by following these steps.

The current user settings available are:

  1. Theme – One can change the theme to either ‘dark’ or ‘light’.
  2. To have the option to send a message on enter.
  3. Enable mic to give voice input.
  4. Enable speech output only for speech input.
  5. To have speech output always ON.
  6. To select the default language.
  7. To adjust the speech output rate.
  8. To adjust the speech output pitch.

The first step is to have a default state for all the settings, which acts as the default for all users opening the chat for the first time. We get these defaults from the user preferences store in the file UserPreferencesStore.js:

let _defaults = {
    Theme: 'light',
    Server: 'http://api.susi.ai',
    StandardServer: 'http://api.susi.ai',
    EnterAsSend: true,
    MicInput: true,
    SpeechOutput: true,
    SpeechOutputAlways: false,
    SpeechRate: 1,
    SpeechPitch: 1,
};

We assign these default settings as the initial constructor state of our component so that they are populated beforehand.
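
A minimal sketch of what that could look like in the settings component (the store accessor name and import path are assumptions, not the actual SUSI Web Chat code):

import React from 'react';
import UserPreferencesStore from '../../stores/UserPreferencesStore'; // path assumed

class Settings extends React.Component {
  constructor(props) {
    super(props);
    // Seed the component state with the defaults shown above;
    // getPreferences() is an assumed accessor for those defaults.
    const defaults = UserPreferencesStore.getPreferences();
    this.state = {
      theme: defaults.Theme,
      enterAsSend: defaults.EnterAsSend,
      micInput: defaults.MicInput,
      speechOutput: defaults.SpeechOutput,
      speechOutputAlways: defaults.SpeechOutputAlways,
      speechRate: defaults.SpeechRate,
      speechPitch: defaults.SpeechPitch,
    };
  }
}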

  2. On changing the values of the various settings, we update the state through the following handler functions:

  • handleSelectChange – handles the theme changes
  • handleEnterAsSend – handles the enter-as-send setting
  • handleMicInput – handles enabling mic input
  • handleSpeechOutput – handles whether speech output should be enabled
  • handleSpeechOutputAlways – handles whether the user wants to listen to speech output after every input
  • handleLanguage – handles the language settings related to speech
  • handleTextToSpeech – handles the text-to-speech settings, including speech rate and pitch

  3. Once the values are changed, we make use of the universal-cookie package and save the settings in the user's browser. This is done in the handleSubmit function in the Settings.react.js file.

import Cookies from 'universal-cookie';
const cookies = new Cookies(); // create a cookie object

// set all values
let vals = {
    theme: newTheme,
    server: newDefaultServer,
    enterAsSend: newEnterAsSend,
    micInput: newMicInput,
    speechOutput: newSpeechOutput,
    speechOutputAlways: newSpeechOutputAlways,
    rate: newSpeechRate,
    pitch: newSpeechPitch,
}

let settings = Object.assign({}, vals);
settings.LocalStorage = true;
// Store in cookies for anonymous user
cookies.set('settings', settings);
  4. Once the values are set in the browser, we load the new settings every time the user opens SUSI Web Chat by adding the following checks in the file API.actions.js. If the person is not logged in, we get the values from the cookies using the get function and initialise the settings for that user:

if(cookies.get('loggedIn') === null ||
   cookies.get('loggedIn') === undefined){
    // check if not logged in and get the value from the set cookie
    let settings = cookies.get('settings');
    if(settings !== undefined){
        // Check if the settings are set in the cookie
        SettingsActions.initialiseSettings(settings);
    }
}
else{
    // call the AJAX for getting Settings for loggedIn users
}


Persistently Storing loklak Server Dumps on Kubernetes

In an earlier blog post, I discussed loklak setup on Kubernetes. The deployment mentioned in the post was to test the development branch. Next, we needed to have a deployment where all the messages are collected and dumped in text files that can be reused.

In this blog post, I will be discussing the challenges with such deployment and the approach to tackle them.

Volatile Disk in Kubernetes

The pods that hold deployments in Kubernetes have disk storage. Any data written by the application stays only as long as the same version of the deployment is running. As soon as the deployment is updated or relocated, the data stored while the application was running is cleaned up.


Due to this, dumps are written while loklak is running, but they get wiped out when the deployment image is updated. In other words, all dumps are lost when the image updates. We needed to find a solution to this, as we needed permanent storage for the collected dumps.

Persistent Disk

In order to have storage which can hold data permanently, we can mount persistent disk(s) on a pod at the appropriate location. This ensures that the data that is important to us stays with us even when the deployment goes down.


In order to add persistent disks, we first need to create a persistent disk. On Google Cloud Platform, we can use the gcloud CLI to create disks in a given region –

gcloud compute disks create --size=<required size> --zone=<same as cluster zone> <unique disk name>
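For example, for the disk used later in this post (100 GB, zone us-central1-a, named data-dump-disk), the command would look roughly like this:

gcloud compute disks create --size=100GB --zone=us-central1-a data-dump-disk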

After this, we can mount it on a volume defined in the Kubernetes configuration –

      ...
      volumeMounts:
        - mountPath: /path/to/mount
          name: volume-name
  volumes:
    - name: volume-name
      gcePersistentDisk:
        pdName: disk-name
        fsType: fileSystemType

But this setup can’t be used for storing loklak dumps. Let’s see “why” in the next section.

Rolling Updates and Persistent Disk

The Kubernetes deployment needs to be updated when the master branch of loklak server is updated. This update of master deployment would create a new pod and try to start loklak server on it. During all this, the older deployment would also be running and serving the requests.


The control will not be transferred to the newer pod until it is ready and all the probes are passing. The newer deployment will now try to mount the disk which is mentioned in the configuration, but it would fail to do so. This would happen because the older pod has already mounted the disk.


Therefore, all new deployments would simply fail to start due to insufficient resources. To overcome such issues, Kubernetes allows persistent volume claims. Let’s see how we used them for loklak deployment.

Persistent Volume Claims

Kubernetes provides Persistent Volume Claims which claim resources (storage) from a Persistent Volume (just like a pod does from a node). The higher level APIs are provided by Kubernetes (configurations and kubectl command line). In the loklak deployment, the persistent volume is a Google Compute Engine disk –

apiVersion: v1
kind: PersistentVolume
metadata:
  name: dump
  namespace: web
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  gcePersistentDisk:
    pdName: "data-dump-disk"
    fsType: "ext4"

[SOURCE]

It must be noted here that a persistent disk by the name of data-dump-disk has already been created in the same zone.


The storage class defines the way in which the PV should be handled, along with the provisioner for the service –

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
  namespace: web
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-central1-a

[SOURCE]

After having the StorageClass and PersistentVolume, we can create a claim for the volume by using appropriate configurations –

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dump
  namespace: web
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: slow

[SOURCE]

After this, we can mount this claim on our Deployment –

  ...
  volumeMounts:
    - name: dump
      mountPath: /loklak_server/data
volumes:
  - name: dump
    persistentVolumeClaim:
      claimName: dump

[SOURCE]

Verifying persistence of Dumps

To verify this, we can redeploy the cluster using the same persistent disk and check if the earlier dumps are still present there –

$ http http://link.to.deployment/dump/
HTTP/1.1 200 OK
Cache-Control: public, max-age=60
Content-Type: text/html;charset=utf-8
...


<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">

<h1>Index of /dump</h1>
<pre>      Name 
[gz ] <a href="messages_20170802_71562040.txt.gz">messages_20170802_71562040.txt.gz</a>   Thu Aug 03 00:07:21 GMT 2017   132M
[gz ] <a href="messages_20170803_69925009.txt.gz">messages_20170803_69925009.txt.gz</a>   Mon Aug 07 15:40:04 GMT 2017   532M
[gz ] <a href="messages_20170807_36357603.txt.gz">messages_20170807_36357603.txt.gz</a>   Wed Aug 09 10:26:24 GMT 2017   377M
[txt] <a href="messages_20170809_27974404.txt">messages_20170809_27974404.txt</a>      Thu Aug 10 08:51:49 GMT 2017  1564M
<hr></pre>
...

Conclusion

In this blog post, I discussed the process of deploying loklak with persistent dumps on Kubernetes. This deployment is intended to work as root.loklak.org in the near future. The changes were proposed in loklak/loklak_server#1377 by @singhpratyush (me).


Parsing SUSI.AI Blog Feed

Our SUSI.AI Web Chat is improving every day. Recently we decided to add a blog page to our SUSI.AI Web Chat to show all the latest blogs related to SUSI.AI. These blogs are written by our developer team. We have the following feed from which we need to parse the blog information and display it:

http://blog.fossasia.org/tag/susi-ai/feed/

This feed is in RSS (Really Simple Syndication) format. We initially wanted to use the Google Feed API to parse this RSS content, but that service is now deprecated, so we decided to convert the RSS into JSON format and parse the information from that.
We have used the rss2json API for converting our RSS content into JSON. We make an AJAX call to this API to fetch the JSON content:

$.ajax({
    url: 'https://api.rss2json.com/v1/api.json',
    method: 'GET',
    dataType: 'json',
    data: {
        'rss_url': 'http://blog.fossasia.org/tag/susi-ai/feed/',
        'api_key': '0000000000000000000000000000000000000000', // put your API key here
        'count': 50
    }
}).done(function (response) {
    if(response.status !== 'ok'){ throw response.message; }
    this.setState({ posts: response.items, postRendered: true});
}.bind(this));

Explanation:

  • url: The base URL of the API to which we make the call in order to fetch the JSON data.
  • rss_url: The URL of the RSS feed which needs to be converted into JSON format.
  • api_key: The key, which can be generated after making an account on the website.
  • count: The number of feed items to return; the default is 20.

The converted JSON response looks like:

This can be checked here.

Now we have used material-ui cards to show the content of this JSON response. On the success of our AJAX call, we update the array (named posts) in the initial state of our component with response.items (an array containing information about the blogs).

We map over each element in the posts array and return a corresponding card containing the relevant information. We parsed the following information from the JSON:

  • Name of the author: posts.author
  • Title of the blog: posts.title
  • Link to blog on WordPress: posts.link

Publish date of the blog:
The publish date is in the format “2017-08-05 09:05:27” (for example), and we need to format this date. We used the following code to do that:

let date = posts.pubDate.split(' ');
let d = new Date(date[0]);
dateFormat(d, 'dddd, mmmm dS, yyyy') // dateFormat is an npm package

This converts “2017-08-05 09:05:27” to “Saturday, August 5th, 2017”.

Description and featured image of the blog:
We needed to show a short description of the blog and a featured image. There is an element in our posts array named description that contains HTML, starting with the featured image followed by a short description. We needed to convert this HTML into simple text before using it, so I used the htmlToText npm package for this purpose:

let description = htmlToText.fromString(posts.description).split('…');

The description variable now contains simple text. A simple example:

[https://blog.fossasia.org/wp-content/uploads/2017/08/image2-768×442.png] Our SUSI.AI Web Chat has many static pages like Overview, Devices, Team and Support.We have separate CSS files for each component. Recently, we faced a problem regarding design pattern where CSS files of one component were affecting another component. This blog is all about solving this issue and we take an example of distortion

Now this text contains both the link to the featured image and our short description. For getting the link to the featured image, I used a regex (regular expression) in the following code and saved the link in the variable image; for the short description, I used the split function:

let text = description[0].split(']');
let image = susi; // temporary image for initialisation
let regExp = /\[(.*?)\]/;
let imageUrl = regExp.exec(description[0]);
if(imageUrl) {
    image = imageUrl[1];
}

After successfully parsing this information from the JSON, we can render cards with these details. A card looks like this:

As we need all of this to be rendered when the component mounts, we put our AJAX call inside the componentDidMount() function of our React component.
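
Putting it together, the render logic can map over the posts and return one card per post. The following is a minimal sketch using material-ui 0.x Card components and the parsing steps described above; the exact markup differs in the real component:

import React from 'react';
import { Card, CardHeader, CardText } from 'material-ui/Card';
import htmlToText from 'html-to-text';
import dateFormat from 'dateformat';

// Sketch: turn parsed feed items into material-ui cards.
function renderBlogCards(posts) {
  return posts.map((post, i) => {
    const description = htmlToText.fromString(post.description).split('…');
    const date = dateFormat(new Date(post.pubDate.split(' ')[0]), 'dddd, mmmm dS, yyyy');
    return (
      <Card key={i}>
        <CardHeader title={post.title} subtitle={post.author + ' on ' + date} />
        <CardText>{description[0].split(']').pop()}</CardText>
      </Card>
    );
  });
}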
