Open Event Server: Testing Image Resize Using PIL and Unittest

FOSSASIA's Open Event Server project uses a set of functions to resize an image from its original size to, for example, a thumbnail, an icon or a larger version. How do we test this image resizing functionality in the Open Event Server project? To test it, we need to verify that the dimensions of the resized image match the dimensions we asked for. For example, in the following function, we provide the URL of the image we received; the function creates a resized image and saves the resized version.

def create_save_resized_image(image_file, basewidth, maintain_aspect, height_size, upload_path,
                              ext='jpg', remove_after_upload=False, resize=True):
    """
    Create and Save the resized version of the background image
    :param resize:
    :param upload_path:
    :param ext:
    :param remove_after_upload:
    :param height_size:
    :param maintain_aspect:
    :param basewidth:
    :param image_file:
    :return:
    """
    filename = '{filename}.{ext}'.format(filename=get_file_name(), ext=ext)
    image_file = cStringIO.StringIO(urllib.urlopen(image_file).read())
    im = Image.open(image_file)

    # Convert to jpeg for lower file size.
    if im.format != 'JPEG':
        img = im.convert('RGB')
    else:
        img = im

    if resize:
        if maintain_aspect:
            width_percent = (basewidth / float(img.size[0]))
            height_size = int((float(img.size[1]) * float(width_percent)))

        img = img.resize((basewidth, height_size), PIL.Image.ANTIALIAS)

    temp_file_relative_path = 'static/media/temp/' + generate_hash(str(image_file)) + get_file_name() + '.jpg'
    temp_file_path = app.config['BASE_DIR'] + '/' + temp_file_relative_path
    dir_path = temp_file_path.rsplit('/', 1)[0]

    # create dirs if not present
    if not os.path.isdir(dir_path):
        os.makedirs(dir_path)

    img.save(temp_file_path)
    upfile = UploadedFile(file_path=temp_file_path, filename=filename)

    if remove_after_upload:
        os.remove(image_file)

    uploaded_url = upload(upfile, upload_path)
    os.remove(temp_file_path)

    return uploaded_url


In this function, we pass the image URL, the width and height to resize to, a flag for whether to maintain the aspect ratio (True or False), and the folder to save the image in. For this blog, we assume the aspect ratio flag is False, which means we do not maintain the aspect ratio while resizing. Given these parameters, the function returns the URL of the saved, resized image.

To test whether the image has been resized to the correct dimensions, we use Pillow, or as it is popularly known, PIL. We write a separate function named getsizes() which takes the image file as a parameter. Using the Image module of PIL, we open the file as a JpegImageFile object. The JpegImageFile object has a size attribute which returns (width, height), so from this function we simply return the size attribute. Following is the code:

def getsizes(self, file):
        # return the image dimensions as (width, height)
        im = Image.open(file)
        return im.size


With this helper in place, let's look at the unit test itself. In the test, we set a dummy width and height that we want to resize to, and set the aspect ratio flag to False as discussed above. This lets us check that both the width and the height are resized properly. We use a Creative Commons licensed image for resizing. This is the code:

def test_create_save_resized_image(self):
        with app.test_request_context():
            image_url_test = 'https://cdn.pixabay.com/photo/2014/09/08/17/08/hot-air-balloons-439331_960_720.jpg'
            width = 500
            height = 200
            aspect_ratio = False
            upload_path = 'test'
            resized_image_url = create_save_resized_image(image_url_test, width, aspect_ratio, height, upload_path, ext='png')
            resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]
            resized_width, resized_height = self.getsizes(resized_image_file)


In the above code, create_save_resized_image returns the URL of the resized image. Since all the unit tests are written for local settings, we get a URL with localhost as the server. However, the server is not actually running, so we cannot access the image through that URL. Instead, we build the absolute path to the image file from the URL and store it in resized_image_file. Then we find the dimensions of the image using the getsizes function we have already written, which gives us the width and height of the newly resized image. We now assert that the width we wanted to resize to is equal to the actual width of the resized image, and make the same check for the height. If both match, the resizing function works correctly. Here is the complete code:

def test_create_save_resized_image(self):
        with app.test_request_context():
            image_url_test = 'https://cdn.pixabay.com/photo/2014/09/08/17/08/hot-air-balloons-439331_960_720.jpg'
            width = 500
            height = 200
            aspect_ratio = False
            upload_path = 'test'
            resized_image_url = create_save_resized_image(image_url_test, width, aspect_ratio, height, upload_path, ext='png')
            resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]
            resized_width, resized_height = self.getsizes(resized_image_file)
            self.assertTrue(os.path.exists(resized_image_file))
            self.assertEqual(resized_width, width)
            self.assertEqual(resized_height, height)


In the Open Event Orga Server, we use this resize function to create three resized versions of an image in various modules, such as events and users. The three sizes are named Large, Thumbnail and Icon. Depending on which one is more suitable, we use it, avoiding the need to load a very big image for a very small div. The exact width and height for these three sizes can be changed from the admin settings of the project. We use the same technique as mentioned above and check the sizes for each of them. Here is the code:

def test_create_save_image_sizes(self):
        with app.test_request_context():
            image_url_test = 'https://cdn.pixabay.com/photo/2014/09/08/17/08/hot-air-balloons-439331_960_720.jpg'
            image_sizes_type = "event"
            width_large = 1300
            width_thumbnail = 500
            width_icon = 75
            image_sizes = create_save_image_sizes(image_url_test, image_sizes_type)

            resized_image_url = image_sizes['original_image_url']
            resized_image_url_large = image_sizes['large_image_url']
            resized_image_url_thumbnail = image_sizes['thumbnail_image_url']
            resized_image_url_icon = image_sizes['icon_image_url']

            resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]
            resized_image_file_large = app.config.get('BASE_DIR') + resized_image_url_large.split('/localhost')[1]
            resized_image_file_thumbnail = app.config.get('BASE_DIR') + resized_image_url_thumbnail.split('/localhost')[1]
            resized_image_file_icon = app.config.get('BASE_DIR') + resized_image_url_icon.split('/localhost')[1]

            resized_width_large, _ = self.getsizes(resized_image_file_large)
            resized_width_thumbnail, _ = self.getsizes(resized_image_file_thumbnail)
            resized_width_icon, _ = self.getsizes(resized_image_file_icon)

            self.assertTrue(os.path.exists(resized_image_file))
            self.assertEqual(resized_width_large, width_large)
            self.assertEqual(resized_width_thumbnail, width_thumbnail)
            self.assertEqual(resized_width_icon, width_icon)


Know the Usage of all Emoji’s on Twitter with loklak Emoji Heatmap

The Loklak apps page now has a new app in its store: Emoji Heatmap. This app can be used to see the usage of all emojis in tweets all over the world in the form of a heatmap. The major difference between the Emoji Heatmap and Emoji Heatmapper apps is that Heatmapper shows tweets related to a specific search query, whereas this app displays all tweets which contain emojis.

How the app fetches and stores the locations

The Emoji Heatmap uses the loklak search API. The search API needs a query in order to search and output JSON data, but this app takes no search query from the user. To make the search dynamic, we use the existing emoji.json file from the emojiHeatmapper app and the loklak-fetcher-client JavaScript file. From the emoji.json file, we collect each query item and search for it using loklak-fetcher-client. From the JSON output returned by loklak-fetcher-client, we extract the location parameter, which is then stored as a "feature" on the OpenLayers 3 map.

So, in the Emoji Heatmap app, we iterate over emoji.json and get a different search query each time we search with the loklak search API.

Code which adds the retrieved location as a feature

  $.getJSON("../emojiHeatmapper/emoji.json", function(json) {
    for (var i = 0; i < json.data.length; i++){
      var query = json.data[i][1];
      // Fetch loklak API data, and fill the vector
      loklakFetcher.getTweets(query, function(tweets) {
        for(var i = 0; i < tweets.statuses.length; i++) {
          if(tweets.statuses[i].location_point !== undefined){
          // Creation of the point with the tweet's coordinates
          //  Coords system swap is required: OpenLayers uses by default
          //  EPSG:3857, while loklak's output is EPSG:4326
            var point = new ol.geom.Point(ol.proj.transform(tweets.statuses[i].location_point, 'EPSG:4326', 'EPSG:3857'));
            vector.addFeature(new ol.Feature({  // Add the point to the data vector
              geometry: point,
              weight: 20
            }));
          }
        }
      });
    }
  });


The above function has two notable variables, query and point. The query variable stores the data retrieved from the emoji.json file on each iteration, and that query is passed to loklak-fetcher-client. The point variable holds the location tracked via the loklak search API, converted into the coordinate system used by OpenLayers 3. The point is then added as a feature to the map vector, which is where all the features are stored before being appended onto the map as a heatmap.
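For completeness, here is a minimal sketch of how such a vector source can be rendered as a heatmap layer on an OpenLayers 3 map; the layer options (blur, radius) and the map setup are illustrative defaults, not the app's exact configuration:

// Vector source that the snippet above fills with ol.Feature points
var vector = new ol.source.Vector();

// Render the collected points as a heatmap layer
var heatmapLayer = new ol.layer.Heatmap({
  source: vector,
  blur: 15,   // illustrative values, tune for the density of tweets
  radius: 8
});

// Base map with the heatmap layer on top
var map = new ol.Map({
  target: 'map',
  layers: [
    new ol.layer.Tile({ source: new ol.source.OSM() }),
    heatmapLayer
  ],
  view: new ol.View({ center: [0, 0], zoom: 2 })
});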


Utilizing Readiness Probes for loklak Dependencies in Kubernetes

When we use an application and fail to connect to it, we do not give up; we retry again and again. In reality, though, applications often break the moment a dependency is not ready: when connecting to an API or database that is not up yet, the app gets upset and refuses to continue working. Something similar was happening with api.loklak.org.
We cannot rewrite the whole application every time this problem occurs, so we need to define these dependencies in a way that handles the situation rather than disappointing the users of the loklak app.

Solution:

We will simply wait until a dependent API or backend of loklak is ready, and only then start the loklak app. To do this, we used Kubernetes health checks.

Kubernetes health checks are divided into liveness and readiness probes. The purpose of liveness probes is to indicate that your application is running, while readiness probes check whether your application is ready to serve traffic. The right combination of liveness and readiness probes used with Kubernetes deployments can:

  • Enable zero-downtime deploys
  • Prevent deployment of broken images
  • Ensure that failed containers are automatically restarted

Pod Ready to be Used?

A Pod with a defined readiness probe won't receive any traffic until the defined request can be successfully fulfilled. These health checks are defined through Kubernetes, so no changes need to be made in our services (APIs). We just need to set up a readiness probe for the APIs that the loklak server depends on. Here you can see the relevant part of the container spec you need to add (in this example we want to know when loklak is ready):

        readinessProbe:
          httpGet:
            path: /api/status.json
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 3


Readiness Probes

Updating loklak deployments without readiness probes whenever something is pushed to development can result in downtime, because old pods are replaced by new pods during a Kubernetes deployment. If the new pods are misconfigured or somehow broken, that downtime lasts until you detect the problem and roll back.
With readiness probes, Kubernetes will not send traffic to a pod until its readiness probe is successful. When updating a loklak deployment, it will also leave the old pods running until the probes have been successful on the new copies. That means that if the new loklak server pods are broken in some way, they will never see traffic; instead, the old loklak server pods will continue to serve all traffic for the deployment.

Conclusion

Readiness probes are a simple solution to ensure that Pods with dependencies do not start serving before their dependencies are ready (in this case, the dependencies of the loklak server). This also works with more than one dependency.


How Anonymous Mode is Implemented in SUSI Android

Login and signup are important features for some Android apps, such as chat apps, because users want to save their messages and keep them secure from others. In SUSI Android we save messages for logged-in users on the server, corresponding to their account. But users can also use the app without logging in. In this blog, I will show you how anonymous mode is implemented in SUSI Android.

When the user logs in using a username and password, we provide a token to the user for a limited amount of time. In anonymous mode, however, we never provide a token to the user, and we set the ANONYMOUS_LOGGED_IN flag to true, which indicates that the user is using the app anonymously.

PrefManager.clearToken()
PrefManager.putBoolean(Constant.ANONYMOUS_LOGGED_IN, true)

We use the ANONYMOUS_LOGGED_IN flag to check whether the user is using the app anonymously. When a user opens the app, we first check whether the user is already logged in. If the user is not logged in, we then check whether the ANONYMOUS_LOGGED_IN flag is true or false; true means the user is using the app in anonymous mode.

if (PrefManager.getBoolean(Constant.ANONYMOUS_LOGGED_IN, false)) {
    intent = new Intent(LoginActivity.this, MainActivity.class);
    startActivity(intent);
}

Otherwise we show the login page to the user. The user can use the app either by logging in with a username and password or anonymously by clicking the skip button. On clicking the skip button, the ANONYMOUS_LOGGED_IN flag is set to true.

public void skip() {
    Intent intent = new Intent(LoginActivity.this, MainActivity.class);
    PrefManager.clearToken();
    PrefManager.putBoolean(Constant.ANONYMOUS_LOGGED_IN, true);
    startActivity(intent);
}

If the user is using the app in anonymous mode but wants to log in, they can do so via the login option in the menu.

When the user selects the login option from the menu, they are redirected to the login screen and the ANONYMOUS_LOGGED_IN flag is set to false. The flag is set to false to ensure that if, instead of logging in, the user closes the app and opens it again, they cannot use the app until they either log in or click the skip button.

case R.id.action_login:
    PrefManager.putBoolean(Constant.ANONYMOUS_LOGGED_IN, false);


How Settings of SUSI Android App Are Saved on Server

The SUSI Android app allows users to specify whether they want the action button of the soft keyboard to act as a carriage return or as a send action. A user can use SUSI.AI on different clients such as Android, iOS and the web. To give users a uniform experience, we need to save user settings on the server, so that if the user changes a setting in one chat client, the change is reflected in the other chat clients too. Every chat client must therefore store user-specific data on the server, and push and pull user data to and from the server, so that all clients maintain the same state for that particular user.

We use a specific key to store each piece of setting information on the server. For example:

  • Enter As Send (key: enter_send, value: true/false): true means pressing the enter key sends the message; false means pressing the enter key adds a new line.
  • Mic Input (key: mic_input, value: true/false): true means the default input method is speech, though keyboard input is supported too; false means the only input method is keyboard input.
  • Speech Always (key: speech_always, value: true/false): true means we get speech output irrespective of the input type.
  • Speech Output (key: speech_output, value: true/false): true means we get speech output in case of speech input; false means we don't get speech output.
  • Theme (key: theme, value: dark/light): dark means the theme is dark and light means the theme is light.

How settings are stored on the server

Whenever user settings are changed, the client updates the changed settings on the server so that the state is maintained across all chat clients. When the user changes a setting, we send three parameters to the server: 'key', 'value' and 'access token'. For example, suppose 'Enter As Send' is set to false. When the user changes it from false to true, we immediately update it on the server; here the key is 'enter_send' and the value is 'true'.

The endpoint used to add or update User Settings is :

BASE_URL + '/aaa/changeUserSettings.json?key=SETTING_NAME&value=SETTING_VALUE&access_token=ACCESS_TOKEN'

'SETTING_NAME' is the key of the corresponding setting, 'SETTING_VALUE' is its updated value and 'ACCESS_TOKEN' is used to find the correct user account. We used the Retrofit library for the network call.

settingResponseCall = ClientBuilder().susiApi.changeSettingResponse(key, value, PrefManager.getToken())

If the setting is successfully updated on the server, the server replies with the message 'You successfully changed settings of your account!'.

How settings are retrieved from the server

We retrieve settings from the server when the user logs in. The endpoint used to fetch user settings is:

BASE_URL + '/aaa/listUserSettings.json?access_token=ACCESS_TOKEN'

It requires the ACCESS_TOKEN to retrieve the settings of a particular user. When the user logs in, we use the getUserSetting method to retrieve the settings from the server. PrefManager.getToken() is used to get the ACCESS_TOKEN.

userSettingResponseCall = ClientBuilder().susiApi.getUserSetting(PrefManager.getToken())

We use userSettingResponseCall to get a response of type 'UserSetting', from which we can retrieve the different settings. 'UserSetting' contains a 'Session' and a 'Settings' object, and 'Settings' contains the values of all settings. The values are saved on the server as strings, so after retrieving a setting we convert it into the required type. For example, the 'Enter As Send' value is boolean, so after retrieving it we convert it to a boolean.

var settings: Settings? = response.body().settings

utilModel.setEnterSend((settings?.enterSend.toString()).toBoolean())


How to Show Preview of Any Link in SUSI Android App

Sometimes in chat apps like WhatsApp or Messenger we find that a link in a message is shown with a preview. Android-Link-Preview is an open source library which can be used to show a preview of a link. The library, developed by Leonardo Cardoso, builds the preview from a URL using information such as the title, relevant text and images. TextCrawler is the main class of this library; as the name suggests, it crawls the given link and grabs the relevant information. In this blog, I will show you how I used this library in SUSI Android to show link previews.

How to include this library in your project

To use android-link-preview in our project, we must add the library's Maven repository in the build.gradle (project) file

repositories {
  maven { url 'https://github.com/leonardocardoso/mvn-repo/raw/master/maven-deploy' }
}

and the dependencies in the build.gradle (module) file:

dependencies {
   compile 'org.jsoup:jsoup:1.8.3' // required
   compile 'com.leocardz:link-preview:1.2.1@aar'
}

Now import these classes in the class where you want to use the link preview.

import com.leocardz.link.preview.library.LinkPreviewCallback;
import com.leocardz.link.preview.library.SourceContent;
import com.leocardz.link.preview.library.TextCrawler;

As the name suggests, LinkPreviewCallback is a callback with two important methods: onPre() and onPos(). The onPre() method is called when the library starts crawling the web to find relevant information, and the onPos() method is called when crawling has finished. The library uses the SourceContent class to save and retrieve data.

LinkPreviewCallback linkPreviewCallback = new LinkPreviewCallback() {
    @Override
    public void onPre() { }

    @Override
    public void onPos(final SourceContent sourceContent, boolean b) { }
};

The TextCrawler class is the main class; its makePreview() method starts the web crawling. makePreview() needs two parameters:

  • linkPreviewCallback : an instance of the LinkPreviewCallback class.
  • link : the link of the web page to crawl.

TextCrawler textCrawler = new TextCrawler();
textCrawler.makePreview(linkPreviewCallback, link);

When we call the makePreview() method, the library first checks whether the given link is the URL of an image. If it is, there is no need to crawl the web, because no further information can be found from that link; otherwise it starts crawling the page. Since web pages are written in HTML, the library uses jsoup to parse the HTML document and extract useful data from the web page. It then stores the different pieces of information separately, i.e. the 'title', 'description' and 'image url', using the setTitle(), setDescription() and setImage() methods of the SourceContent class respectively.

sourceContent.setTitle(metaTags.get("title"));

When it finishes all these tasks, it calls the onPos() method of LinkPreviewCallback to inform us that it is done, and we can then use the stored data via the getTitle(), getDescription() and getImage() methods inside onPos().

sourceContent.getTitle();

Example :

We are using Android-link-preview in our SUSI Android app


Characterization of Transistors Using PSLab

Transistors are one of the key building blocks of all electronics. They are fundamentally three-terminal semiconductor devices, with the terminals being labelled as the Emitter(E), Base(B), and Collector(C). These active components are found everywhere in electronics, and all of the complex processors that power everything from cellphones to aircraft employ millions of these devices in switching and amplification roles. In this blog post, we shall use the PSLab to explore some of the fundamental properties of transistors, and their various applications.

Transistor as an amplifier

In the schematic shown, we shall try to use an NPN transistor to amplify a small signal.

A small-amplitude oscillation generated by W1, with the amplitude knob turned down to a very low level, is used as the input. Since transistors do not handle bipolar signals, we have mixed in a constant DC voltage generated by PV3 to shift this small signal into the positive domain.

The fluctuating potential difference incident at the base of the transistor creates a corresponding current flow between the Base and Emitter.

By the fundamental property of transistors, this influences the path resistance between the Collector (C) and the Emitter (E), and the resultant amplified voltage output can be monitored at the junction between the 1K resistor and the collector.

We have used CH1 to monitor the input voltage, and CH2 for the output.

In a more applied scenario, we can implement the second schematic in order to create an audio amplifier. Instead of using W1 as the input signal, a speaker is used as a microphone. When a sound signal is incident on the speaker, its membrane oscillates, and as a result the coil attached to it does the same. Since this coil is placed in a magnetic field, its oscillations result in a change in the magnetic flux passing through it, and this change induces a voltage (EMF) at its output. We then use our transistor amplifier to amplify this small EMF.

Figure 2 : A Transistor being used to apply a gain of 81.5x to a small amplitude sine wave. The input waveform (green) is shown on a +/-500mV full scale and the output waveform is shown on a +/-8V scale in order to be able to view both. However, due to the difference in scales, the actual difference in amplitudes is 16 times more than what is visible.
Common Emitter Characteristics
Schematic Diagram

Any introductory course on transistors includes a diagram similar to the one shown, along with a description of how, for any base current, the collector current eventually saturates, and how this saturation level is proportional to the base current itself.

With the PSLab's transistor CE characterization app, we can set up this experiment and verify this for ourselves using an NPN transistor. The results shown were gathered using a 2N2222 transistor.

In the schematic, the base current is determined by the voltage source PV2 and a high-value series resistor of 200 kOhm. We use the analog input CH3 to monitor the voltage present at the base of the transistor in order to calculate the total base current.

Base current = V/R = (PV2 – CH3) / 200e3

Now that we have set a particular base current, PV1 is used to sequentially increase the voltage across the collector and emitter of the transistor. A current limiting resistor of 1K Ohm is used, and CH1 is used to monitor the voltage drop across the transistor.

Collector current = V/R = (PV1 – CH1) / 1e3

Plotting the behaviour of the collector current with respect to the collector voltage gives us the familiar current voltage characteristics of a transistor.

Figure 3: Common emitter characteristics of an NPN transistor (2N2222) for various base currents

We can now alter the base current by changing PV2, and verify that the saturation current for the collector is indeed a function of it.
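As a quick illustration of the arithmetic above, here is a small sketch that converts measured voltages into base and collector currents; the voltage readings used here are placeholders, not actual PSLab measurements:

// Resistor values from the schematic
const R_BASE = 200e3;      // 200 kOhm series resistor at the base
const R_COLLECTOR = 1e3;   // 1 kOhm current-limiting resistor at the collector

// Base current for a given PV2 setting and measured CH3 voltage (volts)
function baseCurrent(pv2, ch3) {
  return (pv2 - ch3) / R_BASE;
}

// Collector current for a given PV1 setting and measured CH1 voltage (volts)
function collectorCurrent(pv1, ch1) {
  return (pv1 - ch1) / R_COLLECTOR;
}

// Placeholder readings taken while PV1 is swept for one fixed base current
const readings = [
  { pv1: 0.5, ch1: 0.45 },
  { pv1: 1.0, ch1: 0.80 },
  { pv1: 2.0, ch1: 1.20 }
];
const curve = readings.map(function(r) {
  return {
    vce: r.ch1,                         // voltage across the transistor
    ic: collectorCurrent(r.pv1, r.ch1)  // corresponding collector current
  };
});
console.log(baseCurrent(1.0, 0.65), curve);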


Skill History Component in Susi Skill CMS

SUSI Skill CMS is an editor to write and edit skills easily. It is built on the ReactJS framework and follows an API-centric approach where the SUSI server acts as the API server. Using the Skill CMS we can browse the history of a skill, and see the commit ID, the commit message and the name of the author who made changes to that skill. In this blog post, we will see how to add a skill revision history component to SUSI Skill CMS.

One text file represents one skill; it may contain several intents which all belong together. SUSI skills are stored in the susi_skill_data repository. We can access any skill based on a four-tuple of parameters: model, group, language and skill.

<Menu.Item key="BrowseRevision">
 <Icon type="fork" />
   Browse Skills Revision
 <Link to="/browseHistory"></Link>
</Menu.Item>

First let's create an option in the sidebar menu, and link it to the "/browseHistory" route created in index.js:

 <Route path="/browseHistory" component={BrowseHistory} />

Next we will add skill versioning using the endpoints provided by the SUSI server. To select a skill, we will create a drop-down list using SelectField, a component of Material-UI.

request('http://cors-anywhere.herokuapp.com/api.susi.ai/cms/getModel.json').then((data) => {
    console.log(data.data);
    data = data.data;
    for (let i = 0; i < data.length; i++) {
        models.push(<MenuItem value={i} key={data[i]} primaryText={`${data[i]}`}/>);
    }

    console.log(models);
});
 <SelectField
   floatingLabelText="Expert"
   style={{width: '50px'}}
   value={this.state.value}
   onChange={this.handleChange}>
      {experts}
  </SelectField>

We store the models, groups and languages in arrays using the endpoints api.susi.ai/cms/getModel.json, api.susi.ai/cms/getGroups.json and api.susi.ai/cms/getAllLanguages.json, and set the values in the respective select fields. The request function takes the URL as a string, parses the JSON and returns an object containing either the data or an error, depending on the response from the server. Now run your project using

npm start

and you should be able to see the drop-down list working.
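For reference, here is a minimal sketch of what such a request helper could look like; this is an assumption about its shape based on the description above, not the CMS's actual implementation:

// Hypothetical helper: fetch a CMS endpoint and hand back parsed JSON or an error
function request(url) {
  return fetch(url)
    .then(function(response) {
      if (!response.ok) {
        throw new Error('Request failed with status ' + response.status);
      }
      return response.json();
    })
    .then(function(json) { return { data: json }; })       // success: wrap the parsed JSON
    .catch(function(error) { return { error: error }; });  // failure: expose the error
}

// Usage mirroring the snippet above
request('http://cors-anywhere.herokuapp.com/api.susi.ai/cms/getModel.json')
  .then(function(result) { console.log(result.data || result.error); });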

Next, we will use Material-UI tables to display the organized data. To use the table component we need to import the Table component along with its body, header and row/column components from Material-UI.

import {Table, TableBody, TableHeader, TableHeaderColumn, TableRow, TableRowColumn} from "material-ui/Table";

We then create our header columns; in our case there are three, namely Commit ID, Commit Message and Author Name.

 <TableRow>
  <TableHeaderColumn tooltip="Commit ID">Commit ID</TableHeaderColumn>
  <TableHeaderColumn tooltip="Commit Message">Commit Message</TableHeaderColumn>
  <TableHeaderColumn tooltip="Author Name">Author Name</TableHeaderColumn>
 </TableRow>

To get the modification history of a skill, we will use the endpoint "http://api.susi.ai/cms/getSkillHistory.json". It uses JGit for managing version control in the skill data repository; JGit is a library which implements Git functionality in Java. An endpoint is accessed based on user roles, which can be Admin, Privilege, User or Anonymous. In our case it is Anonymous, so a user need not log in to access this endpoint.

 url = "http://api.susi.ai/cms/getSkillHistory.json?model="+models[this.state.modelValue].key+"&group="+groups[this.state.groupValue].key+"&language="+languages[this.state.languageValue].key+"&skill="+this.state.expertValue;

After building the URL, we make an AJAX network call to get the modification history of the skill. On success, we store the returned commits in a table array in the component state and display them in rows through the render() method, which checks whether the data is set in state; if so, it renders the contents, otherwise we display the error that occurred while processing the request.

 $.ajax({
            url: url,
            jsonpCallback: 'pccd',
            dataType: 'jsonp',
            jsonp: 'callback',
            crossDomain: true,
            success: function(data) {
                data = data.commits;
                let array = [];
                for(let i = 0; i < data.length; i++){
                    array.push(data[i]);
                }
                // the commits collected in `array` are stored in component
                // state (tableData) so that render() can display them
            }
        });

Inside render(), the table rows are generated from tableData:

{tableData.map((row, index) => (
  <TableRow key={index}>
    <TableRowColumn>{row.commitID}</TableRowColumn>
    <TableRowColumn>{row.commit_message}</TableRowColumn>
    <TableRowColumn>{row.author}</TableRowColumn>
  </TableRow>
))}

Test the final output on http://skills.susi.ai/browseHistory or http://localhost:3000/browseHistory: select the model, group, language and skill to get the history of that skill.


Next time you need drop-down lists or tables to organize your data, check out https://github.com/callemall/material-ui for examples, which can help in providing good outlines for your apps. To contribute to susi_skill_cms, join our chat channel on Gitter: https://gitter.im/fossasia/susi_server and browse https://github.com/fossasia/susi_skill_cms for the complete code.


Getting Response Feedback In SUSI.AI Web Chat

The SUSI.AI Web Chat provides responses for various queries, but the quality of the responses is not always the best possible. Machine learning and deep learning algorithms will help us to solve this step by step. In order to implement machine learning, we need feedback mechanisms. The first step in this direction is to provide users with a way to give feedback on responses with a "thumbs up" or "thumbs down". In this blog, I explain how we fetch the feedback for responses from the server.

On asking a query like "tell me a quote", SUSI responds with the following message:

Now the user can rate the response by pressing the thumbs up or thumbs down button. We store this rating on the server. To get the feedback counts, we use the following endpoint:

BASE_URL+'/cms/getSkillRating.json?'+'model='+model+'&group='+group+'&skill='+skill;

Here:

  • BASE_URL: Base URL of our server: http://api.susi.ai/
  • model: Model of the skill from which response is fetched. For example “general”.
  • group: The group of the skill from which response is fetched. For example “entertainment”.
  • skill: name of the skill from which response is fetched. For example “quotes”.

We make an ajax call to the server to fetch the data:

$.ajax({
          url: getFeedbackEndPoint,
          dataType: 'jsonp',
          crossDomain: true,
          timeout: 3000,
          async: false,
          success: function (data) {
            console.log(getFeedbackEndPoint);
            console.log(data);
            if (data.accepted) {
              let positiveCount = data.skill_rating.positive;
              let negativeCount = data.skill_rating.negative;
              receivedMessage.positiveFeedback = positiveCount;
              receivedMessage.negativeFeedback = negativeCount;
            }
          }
});

In the success function we receive the data, which arrives via JSONP. We parse it to get the desired values and store them in the variables positiveCount and negativeCount; the response carries the counts under the skill_rating key.

In the client, we can get the values corresponding to the positive and negative keys as follows:

let positiveCount = data.skill_rating.positive;
let negativeCount = data.skill_rating.negative;

This way we can fetch the positive and negative counts corresponding to a particular response. This data can be used in many ways, for example:

  • It can be used to display the number of positive and negative counts next to the thumbs.

  • It can be used in machine learning algorithms to improve the response that SUSI.AI provides.

Testing Link:

http://chat.susi.ai/


Integrating Twitter Authentication Using Twitter4j in Phimpme Android Application

We have used the Twitter4j API to authenticate Twitter in the Phimpme application. Below are the steps for setting up the Twitter4j API in Phimpme and logging in to Twitter from the Phimpme Android application.

Setting up the environment

Download the Twitter4j package from http://twitter4j.org/en/. For sharing images we only need the twitter4j-core-3.0.5.jar and twitter4j-media-support-3.0.5.jar files. Copy these files and save them in the libs folder of the application.

Go to build.gradle and add the following lines to the dependencies block:

dependencies {
compile files('libs/twitter4j-core-3.0.5.jar')
compile files('libs/twitter4j-media-support-3.0.5.jar')
}

Adding Phimpme application in Twitter development page

Go to https://dev.twitter.com/ -> My apps -> Create new app. A "Create an application" window opens where we have to fill in all the necessary details about the application; every field is mandatory. For an Android application, anything can be entered in the website field, for example www.google.com, but it still has to be filled in.

After filling in all the details, click on the "Create your Twitter application" button.

Adding Twitter Consumer Key and Secret Key

This generates the Twitter consumer key and Twitter consumer secret. We need to add these to our strings.xml file.

<string name="twitter_consumer_key">ry1PDPXM6rwFVC1KhQ585bJPy</string>
<string name="twitter_consumer_secret">O3qUqqBLinr8qrRvx3GXHWBB1AN10Ax26vXZdNlYlEBF3vzPFt</string> 

Twitter Authentication

Make a new Java class, say LoginActivity, where we first fetch the Twitter consumer key and Twitter consumer secret.

private static Twitter twitter;
    private static RequestToken requestToken;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_twitter_login);
        twitterConsumerKey = getResources().getString(R.string.twitter_consumer_key);
        twitterConsumerSecret = getResources().getString(R.string.twitter_consumer_secret);  

We are using a web view to interact with the Twitter login page.

twitterLoginWebView = (WebView)findViewById(R.id.twitterLoginWebView);
        twitterLoginWebView.setBackgroundColor(Color.TRANSPARENT);
        twitterLoginWebView.setWebViewClient( new WebViewClient(){
            @Override
            public boolean shouldOverrideUrlLoading(WebView view, String url){

                if( url.contains(AppConstant.TWITTER_CALLBACK_URL)){
                    Uri uri = Uri.parse(url);
                    LoginActivity.this.saveAccessTokenAndFinish(uri);
                    return true;
                }
                return false;
            }             

If the access token is already saved, the user is already signed in; otherwise we send the Twitter consumer key and consumer secret to obtain an access token. The ConfigurationBuilder class is used to set the consumer key and consumer secret.

ConfigurationBuilder configurationBuilder = new ConfigurationBuilder();
configurationBuilder.setOAuthConsumerKey(twitterConsumerKey);
configurationBuilder.setOAuthConsumerSecret(twitterConsumerSecret);
Configuration configuration = configurationBuilder.build();
twitter = new TwitterFactory(configuration).getInstance();

This is followed by a Runnable thread that checks whether the request token has been received. If authentication fails, an error Toast message pops up.

new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    requestToken = twitter.getOAuthRequestToken(AppConstant.TWITTER_CALLBACK_URL);
                } catch (Exception e) {
                    final String errorString = e.toString();
                    LoginActivity.this.runOnUiThread(new Runnable() {
                        @Override
                        public void run() {
                            mAlertBuilder.cancel();
                            Toast.makeText(LoginActivity.this, errorString, Toast.LENGTH_SHORT).show();
                            finish();
                        }
                    });
                    return;
                }

                LoginActivity.this.runOnUiThread(new Runnable() {
                    @Override
                    public void run() {
                        twitterLoginWebView.loadUrl(requestToken.getAuthenticationURL());
                    }
                });
            }
        }).start();

Conclusion

Twitter4j offers seamless integration of Twitter into any application and makes it easy to authenticate without leaving the actual app. In Phimpme Android, it is further used to upload photos directly to Twitter and to fetch the user's profile picture and username.
