Adding JSON API support to ember-models-table in Open Event Front-end

The Open Event Front-end project uses ember-models-table for handling all the table components in the application. While ember-models-table handles server requests for operations like pagination, sorting & filtering well, it does not support the JSON API specification used in the Front-end project.

In this blog we will see how we integrated the JSON API standard into ember-models-table: how we added JSON API support to the table and made requests to the Open Event Orga-server.

Adding JSON API support for filtering & sorting

The JSON API spec follows a strict structure for supporting meta data & filtering options: the server expects an array of objects specifying the name of the field, the operation and the value for filtering. The `name` attribute specifies the column on which the filter is applied, e.g. we use `name` for the event's name. The `op` attribute specifies the operation to be used for filtering, and the `val` attribute provides the value for comparison. You can check the list of all the supported operations here.
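
For example, a filter on the event name would serialize into the request URL as a query parameter like this (the value here is illustrative):

filter=[{"name":"name","op":"ilike","val":"%sample%"}]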

To implement the filter, we check whether the column filter is in use, i.e. whether the filter string is empty or not. If the string is not empty we add a filter object for the column using the spec described above, else we remove the filter object of that column.

if (filter) {
  query.filter.pushObject({
    name : filterTitle,
    op   : 'ilike',
    val  : `%${filter}%`
  });
} else {
  // a fresh object literal would not match by identity, so look up
  // the column's existing filter object and remove that instead
  query.filter.removeObject(query.filter.findBy('name', filterTitle));
}

For sort functionality we need to pass a query parameter called `sort`, which is a string value in the URL. Sorting can be done in ascending or descending order, for which the server expects different values: we pass `sort=name` & `sort=-name` for sorting in ascending order & descending order respectively.

const sortSign = {
  none : '',
  asc  : '',  // sort=name
  desc : '-'  // sort=-name
};
let sortedBy = get(column, 'sortedBy');
if (typeOf(sortedBy) === 'undefined') {
  sortedBy = get(column, 'propertyName');
}
// prefix the property with the sign of the current sort order,
// e.g. `-name` for descending ('sortOrder' is assumed to hold 'none' | 'asc' | 'desc')
set(query, 'sort', `${sortSign[sortOrder]}${sortedBy}`);

Adding support for pagination

Pagination in JSON API is implemented using the query parameters `page[size]` & `page[number]`, which specify the size of the page & the current page number respectively, e.g.

page[size]=10&page[number]=1

This will load the first ten events from the server in the application.

Once the data is loaded in the application we calculate the number of pages to be rendered. The response from the server has meta data attached which contains the total number of events, in the following structure:

meta: {
  count: 100
}

We calculate the number of pages by dividing the total count by the page size. We check if the number of items is greater than the pageSize, and calculate the number of pages using the formula `items / pageSize + (items % pageSize ? 1 : 0)`. If the number of items is less than the pageSize we do not have to calculate the pages, and we simply hide the pagination in the footer.

if (pageSize > items) {
  this.$('.pagination').css({
    display: 'none'
  });
} else {
  this.$('.pagination').removeAttr('style');
  pages = parseInt((items / pageSize));
  if (items % pageSize) {
    pages = pages + 1;
  }
}

Adding dynamic routing support to ember-models-table

We may want to use ember-models-table in dynamic routes like the `events/list` route, where we load live, drafted & past events based on the current route. ember-models-table does not support dynamic routes by default. To add this we override the component's didReceiveAttrs() method, which is executed every time the component updates. We reset the pageSize, the currentPageNumber and the content of the table as the route changes.

didReceiveAttrs() {
  set(this, 'pageSize', 10);
  set(this, 'currentPageNumber', 1);
  set(this, 'filteredContent', get(this, 'data'));
}

As a result, we now have tables supporting JSON API in the Open Event Front-end application.

Thank you for reading the blog, you can check the source code for the example here.


Implementation of SUSI Web Chat Auto Sizing Message Composer

While using the SUSI Web Chat application we may have to send lengthy messages. The existing message composer supports lengthy messages, but it keeps a constant height for every user input. While developing the application we got a requirement to build a growing message composer.

The final output of this implementation is a message composer that grows as the user completes new lines, up to 5 lines; after 5 lines it maintains a fixed size and enables scrolling.

We tried several packages to get this done, and finally did it using react-textarea-autosize: it provides all these features and lets us customize the elements further.

First we have to install the npm package:

npm install --save react-textarea-autosize

After the installation we have to import the package at the top of “MessageComposer.react.js”:

import TextareaAutosize from 'react-textarea-autosize';

Next we need to use this package like this,

         <TextareaAutosize
           className='scroll'
           id='scroll'
           minRows={1}
           maxRows={5}
           placeholder="Type a message..."
           value={this.state.text}
           onChange={this._onChange.bind(this)}
           onKeyDown={this._onKeyDown.bind(this)}
           ref={(textarea) => { this.nameInput = textarea; }}
           style={{ background: this.props.textarea}}
         />

This package provides the “minRows” and “maxRows” props, with which we can define the minimum height of the textarea and the maximum height it can grow to. If you need to know more about auto-growing text areas and to get examples, refer to this.

Next we wanted to hide the scrollbar that appears when the textarea content exceeds its maximum height.

This is how we hide the scrollbar on Chrome browsers:

.scroll::-webkit-scrollbar {
   display: none;
}

This is how we hide the scrollbar on Firefox browsers:

.scroll {
   overflow: -moz-scrollbars-none;
}

Now we have to style the textarea, because it comes with default styles. We wrapped the textarea with a div and applied our styles to that. In our case we wrapped the textarea with <div className=“textBack”>.

This is how we styled the textarea using the wrapper div.

.textBack{
 background: #fff;
 width: 83%;
 border-radius: 40px;
 padding: 5px 20px;
 display: block;
 position: relative;
 top: 12%;
 box-sizing: content-box;
 margin: 0px 0 10px 0;
}

Our textarea is like this.

It expands when the user's input exceeds the width of the textarea.

This is how we implemented the SUSI Web Chat's growing message composer. If you would like to contribute, please fork our repository on GitHub.


Implementing Tree View in PSLab Android App

When a task expands over subtasks, it can easily be represented by a tree diagram. In the context of Android it can be implemented using an expandable list view. But in a scenario where the subtasks have mini tasks appended to them, it is hard to implement using the general two-level expandable list views. The PSLab Android application supports many experiments to perform using the PSLab device. These experiments are divided into major sections, and each experiment is listed under one of them.

The best way to implement this functionality in the Android application is a multi-layer tree view. In this context, three layers are enough.

This was implemented with the help of a library called AndroidTreeView. This blog will outline how to modify and implement it in the PSLab Android application.

Basic Idea

The tree view implementation simply follows the data structure “Tree” used in algorithms. Every tree has a root where it starts, and from the root there are branches connected using edges. Every edge connects a parent and a child, and to reach a child, one has to traverse exactly one route.

Setting Up Dependencies

Implementing the tree view begins with setting up dependencies in the project's Gradle file.

compile 'com.github.bmelnychuk:atv:1.2.+'

Creating UI for tree view

The speciality of this implementation is that it can be loaded into any kind of layout, such as a LinearLayout, RelativeLayout or FrameLayout.

final TreeNode Root = TreeNode.root();
Root.addChildren(
       // Add child nodes here
);
// Set up the tree view
AndroidTreeView experimentsListTree = new AndroidTreeView(getActivity(), Root);
experimentsListTree.setDefaultAnimation(true);
// attach the tree to any container layout (LinearLayout, RelativeLayout, ...)
containerLayout.addView(experimentsListTree.getView());

Creating a node holder

Trees are made of a collection of tree nodes. A holder for a tree node can be created using an object which extends the BaseNodeViewHolder class provided by the library. BaseNodeViewHolder is parameterized with a data class (generally static, so it can be instantiated without an outer instance) which carries the data shown in the node's TextViews, ImageViews and buttons.

Once the holder extends the BaseNodeViewHolder, it should override two methods as follows;

@Override
public View createNodeView(final TreeNode node, ClassContainingNodeData header) {
    // inflate and return the view for this node
}

@Override
public void toggle(boolean active) {
    // react to the node being expanded or collapsed
}

createNodeView() inflates the view for the node, and toggle() is called when the node is expanded or collapsed in the UI.

The following code snippet shows how to create an object which extends the above mentioned class with the overridden methods.

public class ExperimentHeaderHolder extends TreeNode.BaseNodeViewHolder<ExperimentHeaderHolder.ExperimentHeader> {

    private ImageView arrow;

    public ExperimentHeaderHolder(Context context) {
            super(context);
    }

    @Override
    public View createNodeView(final TreeNode node, ExperimentHeader header) {

            final LayoutInflater inflater = LayoutInflater.from(context);
            final View view = inflater.inflate(R.layout.header_holder, null, false);

            TextView title = (TextView) view.findViewById(R.id.title);
            title.setText(header.title);

            arrow = (ImageView) view.findViewById(R.id.experiment_arrow);
        
            return view;
    }

    @Override
    public void toggle(boolean active) {
            arrow.setImageResource(active ? arrow_drop_up : arrow_drop_down);
    }

    public static class ExperimentHeader {

            public String title;

            public ExperimentHeader(String title) {
               this.title = title;
            }
    }
}

Creating a TreeNode

Once the holder is complete, we can move on to creating an actual tree node. The TreeNode constructor takes the data object (the inner header class mentioned earlier), and a view holder, which extends BaseNodeViewHolder, is set on it to inflate the view in the tree layout. The view holder can be a different class for different node types; the importance of this separation is explained below;

TreeNode treeNode = new TreeNode(new ExperimentHeaderHolder.ExperimentHeader("Title"))
       .setViewHolder(new ExperimentHeaderHolder(context));

In the Saved Experiments section of the PSLab Android application, not all three levels should implement the toggle behavior: when a user clicks on an experiment (a last-level item), they do not expect the icon to change like the headers do, where an arrow points up or down on click. In this case we can reuse the holder's data class, which has the title attribute, while creating a holder that does not override the toggle function, ignoring icon toggling at the last level of the tree view. This can be illustrated using a code snippet as follows;

new TreeNode(new ExperimentHeaderHolder.ExperimentHeader("Title"))
       .setViewHolder(new IndividualExperimentHolder(context));

Creating parent nodes and finally the Root node

The final part of the implementation is creating parent nodes to group similar experiments together. The TreeNode object supports the methods addChild() and addChildren(): addChild() adds a single tree node to the specific tree node, and addChildren() adds many tree nodes at once. The following code snippet illustrates how to add many tree nodes to a node, making it a parent node.

treeDiodeExperiments.addChildren(treeZener, treeDiode, treeDiodeClamp, treeDiodeClip, treeHalfRectifier, treeFullWave);
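
Putting these pieces together, a three-level tree for the Saved Experiments section could be assembled as follows; the node titles and variable names here are illustrative, not the exact ones used in PSLab.

TreeNode root = TreeNode.root();

// a section header at the second level
TreeNode diodeSection = new TreeNode(
        new ExperimentHeaderHolder.ExperimentHeader("Diode Experiments"))
        .setViewHolder(new ExperimentHeaderHolder(context));

// individual experiments at the third level, without icon toggling
TreeNode zener = new TreeNode(
        new ExperimentHeaderHolder.ExperimentHeader("Zener I-V"))
        .setViewHolder(new IndividualExperimentHolder(context));
TreeNode diode = new TreeNode(
        new ExperimentHeaderHolder.ExperimentHeader("Diode I-V"))
        .setViewHolder(new IndividualExperimentHolder(context));

diodeSection.addChildren(zener, diode);
root.addChild(diodeSection);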

Setting a click listener

The click listener is a very important part of the implementation. Each tree node can be attached to a click listener using the interface provided by the library, as follows;

treeNode.setClickListener(new TreeNode.TreeNodeClickListener() {
   @Override
   public void onClick(TreeNode node, Object value) {

   }
});

The value object is the data class attached to the holder; its attributes can be retrieved by casting it to the specific class as follows;

String title = ((ExperimentHeaderHolder.ExperimentHeader) value).title;


Caching Elasticsearch Aggregation Results in loklak Server

To provide aggregated data for various classifiers, loklak uses Elasticsearch aggregations. Aggregated data says a lot more than a few instances from it can. But performing aggregations on each request can be very resource-intensive, so we needed to come up with a way to reduce this load.

In this post, I will be discussing how I came up with a caching model for the aggregated data from the Elasticsearch index.

Fields to Consider while Caching

At the classifier endpoint, aggregations can be requested based on the following fields –

  • Classifier Name
  • Classifier Classes
  • Countries
  • Start Date
  • End Date

But to cache results, we can ignore cases where we just require a few classes or countries and store aggregations for all of them instead. So the fields that will define the cache to look for will be –

  • Classifier Name
  • Start Date
  • End Date

Type of Cache

The data structure used for caching was Java's HashMap. It maps a special string key to a special wrapper object, both discussed in the following sections.
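
A minimal declaration for such a cache could look like this (the field name here is assumed) –

// maps the composed key (index, classifier, date range) to the cached results
private static Map<String, JSONObjectWrapper> cacheMap = new HashMap<>();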

Key

The key is built using the fields mentioned previously –

private static String getKey(String index, String classifier, String sinceDate, String untilDate) {
    return index + "::::"
        + classifier + "::::"
        + (sinceDate == null ? "" : sinceDate) + "::::"
        + (untilDate == null ? "" : untilDate);
}

In this way, we can handle requests where a user asks for every class there is without running the expensive aggregation job every time. This is because the key for such requests will be the same, as we are not considering country and class for this purpose.

Value

The object used as the value in the HashMap is a wrapper containing the following –

  1. json – It is a JSONObject containing the actual data.
  2. expiry – It is the expiry of the object in milliseconds.

class JSONObjectWrapper {
    private JSONObject json; 
    private long expiry;
    ... 
}

Timeout

The timeout associated with a cached entry is defined in the configuration file of the project as “classifierservlet.cache.timeout”. It defaults to 5 minutes and is used to set the expiry of a cached JSONObject –

class JSONObjectWrapper {
    ...
    private static long timeout = DAO.getConfig("classifierservlet.cache.timeout", 300000);

    JSONObjectWrapper(JSONObject json) {
        this.json = json;
        this.expiry = System.currentTimeMillis() + timeout;
    }
    ...
}
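
The wrapper can then expose the expiry check used by the cache-hit code below; a minimal sketch would be –

class JSONObjectWrapper {
    ...
    boolean isExpired() {
        // the cached object is stale once the current time passes its expiry
        return System.currentTimeMillis() > this.expiry;
    }
}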

Cache Hit

For searching in the cache, the previously mentioned string is composed from the parameters requested by the user. Checking for a cache hit can be done in the following manner –

String key = getKey(index, classifier, sinceDate, untilDate);
if (cacheMap.keySet().contains(key)) {
    JSONObjectWrapper jw = cacheMap.get(key);
    if (!jw.isExpired()) {
        // Do something with jw
    }
}
// Calculate the aggregations
...

But since jw here would contain all the data, we would need to filter out the classes and countries which are not needed.

Filtering results

For filtering out the parts which do not contain the information requested by the user, we can perform a simple pass and exclude the results that are not needed.

Since the number of fields to filter out, i.e. classes and countries, would not be that high, this process would not be very resource-intensive. And at the same time, it saves us from running the heavy aggregation task for each user request.

Since the data about classes is nested inside the respective country field, we need to perform two levels of filtering –

JSONObject retJson = new JSONObject(true);
for (String key : json.keySet()) {
    JSONArray value = filterInnerClasses(json.getJSONArray(key), classes);
    if ("GLOBAL".equals(key) || countries.contains(key)) {
        retJson.put(key, value);
    }
}
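
The filterInnerClasses method is not shown above; a sketch of what it might look like, assuming each entry carries its class name under a "class" key, is –

private static JSONArray filterInnerClasses(JSONArray value, List<String> classes) {
    JSONArray retValue = new JSONArray();
    for (int i = 0; i < value.length(); i++) {
        JSONObject classJson = value.getJSONObject(i);
        // keep only the classes requested by the user
        if (classes.contains(classJson.getString("class"))) {
            retValue.put(classJson);
        }
    }
    return retValue;
}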

Cache Miss

In the case of a cache miss, the helper functions are called from ElasticsearchClient.java to get results. These results are then converted from HashMap to JSONObject and stored in the cache for future use.

JSONObject freshCache = getFromElasticsearch(index, classifier, sinceDate, untilDate);
cacheMap.put(key, new JSONObjectWrapper(freshCache));

The getFromElasticsearch method finds all the possible classes and makes a request to the appropriate method in ElasticsearchClient, getting data for all classifiers and all countries.

Conclusion

In this blog post, I discussed the need for caching of aggregations and the way it is achieved in the loklak server. This feature was introduced in pull request loklak/loklak_server#1333 by @singhpratyush (me).


Upload Images to OwnCloud and NextCloud in Phimpme Android

Continuing to grow the account manager stack in Phimpme Android, we now have two new items to add: OwnCloud and NextCloud. Both are open-source storage services which provide the complete source code of their official apps and libraries on GitHub. You can check them below:

OwnCloud: https://github.com/owncloud

NextCloud: https://github.com/nextcloud

They require a hosting server where you can deploy them and access them through their web and mobile apps. I added a feature in Phimpme to upload images directly to the server right from the app using their android-library.

Steps (How I did it in Phimpme)

  • Add the library in the application Gradle file

Firstly, to work with the service, we need to add the android-library they provide.

compile "com.github.nextcloud:android-library:$rootProject.nextCloudVersion"

Check for the latest version here and use it: https://github.com/nextcloud/android-library/releases

  • Login from Account Manager

As per our Phimpme app flow, the user first connects from the account manager and then shares an image from the app using those credentials. I added a new login activity for both OwnCloud and NextCloud.

  • Saved credentials in Database

To use them later with the android-library, I store the credentials in the Realm database.

account.setServerUrl(data.getStringExtra(getString(R.string.server_url)));
account.setUsername(data.getStringExtra(getString(R.string.auth_username)));
account.setPassword(data.getStringExtra(getString(R.string.auth_password)));
  • Uploading image using library

As per the official OwnCloud guide, I created an object of OwnCloudClient and set the username and password.

private OwnCloudClient mClient;
mClient = OwnCloudClientFactory.createOwnCloudClient(serverUri, this, true);
mClient.setCredentials(
       OwnCloudCredentialsFactory.newBasicCredentials(
               username,
               password
       )
);

I passed the image path we receive in SharingActivity and added the path separator to build the remote path.

File fileToUpload = new File(saveFilePath);
String remotePath = FileUtils.PATH_SEPARATOR + fileToUpload.getName();

I used the UploadRemoteFileOperation class, which just needs the path, mimeType and timeStamp. The library already defines the functions that execute the upload operation.

UploadRemoteFileOperation uploadOperation =
       new UploadRemoteFileOperation(fileToUpload.getAbsolutePath(), remotePath, mimeType, timeStamp);
uploadOperation.execute(mClient, this, mHandler);
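
Here mHandler is a plain android.os.Handler used to deliver the result callback, and this refers to the activity acting as the listener. A minimal sketch, assuming the activity implements the library's OnRemoteOperationListener interface, might be –

private Handler mHandler = new Handler();

@Override
public void onRemoteOperationFinish(RemoteOperation operation, RemoteOperationResult result) {
    if (operation instanceof UploadRemoteFileOperation) {
        if (result.isSuccess()) {
            // notify the user that the image reached the server
        } else {
            // inspect result.getLogMessage() and report the failure
        }
    }
}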

  • Setup Account using Docker and Digital Ocean

I already have a previous blog post on how to set up a NextCloud or OwnCloud account on a server using Digital Ocean and Docker.

Link: https://blog.fossasia.org/how-to-use-digital-ocean-and-docker-to-setup-test-cms-for-phimpme/

Resources:

  1. NextCloud Developer Manual: https://docs.nextcloud.com/server/9/developer_manual/index.html
  2. OwnCloud Library installation: https://doc.owncloud.org/server/9.0/developer_manual/android_library/library_installation.html
  3. Examples: https://doc.owncloud.org/server/9.0/developer_manual/android_library/examples.html

Common Utility classes Progress Bar and Snack Bar in Phimpme Android

As Phimpme Android scales its feature set very fast, code sometimes gets redundant. Two of the most widely used design widgets in Android are the Progress Bar and the Snackbar. A progress bar is shown to the user when some process is happening in the background. A snackbar gives the user feedback about a recent operation; in other words, the snackbar is the new toast in Android, with the cool extra feature of setting an action on it, so that the user can interact with the feedback.

In Phimpme a lot of account login and logout progress happens, and upload success and failure require snackbars to be shown to the user. So, to remove this boilerplate redundancy, I added two utility classes to the app: PhimpmeProgressBarHandler and SnackBarHandler. Below are the code and explanation of both, one by one.

Progress Bar Handler

The constructor takes a Context as a parameter. It looks up the root ViewGroup of the activity's content view, creates a ProgressBar using the core Android attribute progressBarStyleLarge, and sets it to indeterminate with setIndeterminate(true).

private ProgressBar mProgressBar;

public PhimpmeProgressBarHandler(Context context) {
   ViewGroup layout = (ViewGroup) ((Activity) context).findViewById(android.R.id.content)
           .getRootView();

   mProgressBar = new ProgressBar(context, null, android.R.attr.progressBarStyleLarge);
   mProgressBar.setIndeterminate(true);

   RelativeLayout.LayoutParams params = new
           RelativeLayout.LayoutParams(RelativeLayout.LayoutParams.MATCH_PARENT,
           RelativeLayout.LayoutParams.MATCH_PARENT);

   RelativeLayout rl = new RelativeLayout(context);

   rl.setGravity(Gravity.CENTER);
   rl.addView(mProgressBar);

   layout.addView(rl, params);

   hide();

}

Next, a RelativeLayout object is created dynamically and its layout parameters are set to MATCH_PARENT for width and height. Its gravity is set to center and the progress bar view is added to it using the addView method. So basically we have a progress bar ready, and we dynamically created a relative layout and added the progress bar view over it.

The functions used in setting up the views and progress bar are from AOSP itself.

Now that the progress bar is set up, we need functions to show and hide it in the code: show() and hide().

public void show() {
   mProgressBar.setVisibility(View.VISIBLE);
}

public void hide() {
   mProgressBar.setVisibility(View.INVISIBLE);
}

These functions set the visibility of the progress bar.

Usage:

Now, in any class, we can create an object of our progress bar handler class, pass the context to it, and use the show() and hide() methods wherever we want to show or hide the progress bar. Below is a code snippet to illustrate this.

phimpmeProgressBarHandler = new PhimpmeProgressBarHandler(this);

phimpmeProgressBarHandler.show();

phimpmeProgressBarHandler.hide();

Snackbar Handler

To do this, I created a separate class, SnackBarHandler. We create a static function show() and, inside its body, create a Snackbar object and apply the styles to it.

As you can see in the code snippet below, I created a static function with parameters such as a View (to take the view instance), a String (the message to show) and the duration of the snackbar. It sets up the text, text size and action on the snackbar; an “OK” action that dismisses it is predefined in the function.

public static void show(View view, String text, int duration) {
   final Snackbar snackbar = Snackbar.make(view, text, duration);
   View sbView = snackbar.getView();
   TextView textView = (TextView) sbView.findViewById(android.support.design.R.id.snackbar_text);
   textView.setTextColor(Color.WHITE);
   textView.setTextSize(12);
   snackbar.setAction("OK", new View.OnClickListener() {
       @Override
       public void onClick(View view) {
           snackbar.dismiss();
       }
   });
   snackbar.show();
}

Usage:

To use this, directly call the show() method and pass the view and the String message you want to show on the snackbar. There are overloaded variants as well, in which you can omit the duration; see the sketch after the example below. The following code shows a call:

SnackBarHandler.show(parentLayout, getString(R.string.no_account_signed_in));
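
The two-argument call above implies an overloaded variant that defaults the duration; it could be as simple as this (assumed implementation) –

// assumed overload: delegates with a default duration
public static void show(View view, String text) {
    show(view, text, Snackbar.LENGTH_LONG);
}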


Adding Transition Effect Using RxJS And CSS In Voice Search UI Of Susper

Susper provides a voice search feature, which gives users a better search experience. We enhanced the speech-recognition user interface by adding transition effects. The transition effects were required to display appropriate messages according to whether a voice is detected or not. The messages were the following:

  • When a user starts a voice search, it should display the ‘Speak Now’ message for 1-2 seconds and then show the message ‘Listening…’ to acknowledge that it is ready to recognize the voice which will be spoken.
  • If the user does not speak anything, it should display the ‘Please check audio levels or your microphone working’ message within 3-4 seconds and exit the voice search interface.

The idea of speech UI was taken from the market leader and it was implemented in a similar way. On the homepage, it looks like this:

On the results page, it looks like this:

For creating transitions like the ‘Listening…’ and ‘Please check audio levels and microphone’ messages, we used CSS, RxJS Observables and the timer() function.

Let’s start with RxJS Observables and timer() function.

RxJS Observables and timer()

timer() is used to emit numbers in sequence after a given initial delay, and optionally at every specified interval thereafter. It acts as an observable. For example:

let countdown = Observable.timer(2000);

The above code will emit the value of countdown after 2000 milliseconds. Similarly, let's see another example:

let countdown = Observable.timer(2000, 6000);

The above code will emit the first value of countdown after 2000 milliseconds and subsequent values every 6000 milliseconds.
export class SpeechToTextComponent implements OnInit {
  message: any = 'Speak Now';
  timer: any;
  subscription: any;
  ticks: any;
  miccolor: any = '#f44';

  ngOnInit() {
    this.timer = Observable.timer(1500, 2000);
    this.subscription = this.timer.subscribe(t => {
      this.ticks = t;
      // after 1.5 s, switch to the 'Listening...' message
      if (t === 1) {
        this.message = 'Listening...';
      }
      // subsequent ticks arrive at 2 s intervals, as defined in timer()
      if (t === 4) {
        this.message = 'Please check your microphone audio levels.';
        this.miccolor = '#C2C2C2';
      }
      // if no voice is detected, unsubscribe and exit back to the homepage
      if (t === 6) {
        this.subscription.unsubscribe();
        this.store.dispatch(new speechactions.SearchAction(false));
      }
    });
  }
}

The above code displays each message at its appointed time. For creating the text-animation effect, most developers go for plain JavaScript, but the effect can also be achieved using pure CSS.

Text animation using CSS

@-webkit-keyframes typing { from { width: 0; } }
.spch {
  font-weight: normal;
  line-height: 1.2;
  pointer-events: none;
  position: none;
  text-align: left;
  -webkit-font-smoothing: antialiased;
  transition: opacity .1s ease-in, margin-left .5s ease-in, top 0s linear 0.218s;
  -webkit-animation: typing 2s steps(21, end), blink-caret .5s step-end infinite alternate;
  white-space: nowrap;
  overflow: hidden;
  animation-delay: 3.5s;
}
@keyframes specifies the animation code. Here width: 0; means the animation begins at 0% of the message's width and ends at 100% of it. Also, animation-delay: 3.5s has been adjusted with respect to the timer so that the messages and their animation appear at the same time.
This is how it works now:

The source code for the implementation can be found in this pull request: https://github.com/fossasia/susper.com/pull/663


Use of Flux Architecture to Switch between Themes in SUSI Web Chat

While we were developing the SUSI Web Chat we got a requirement to build a dark theme, and a way for the user to switch between the dark and light themes.

The SUSI Web Chat application is made according to the Flux architecture, so we had to build the sub-components according to that architecture.

What is flux:

Flux is not a framework. It is an architecture/pattern that we can use to build applications using React and some other frameworks.

How flux works:

Flux has four types of components: views, actions, the dispatcher and stores. We use JSX to build views and integrate them into our JavaScript code.

When someone triggers an event from a view, it triggers an action, and that action sends its details, such as the action type and data, to the dispatcher. The dispatcher broadcasts those details to every store registered with it, which means every store receives all the action details and data broadcast by the dispatcher it is registered with.

Let's say we have triggered an action from a view that is going to change a value in the store. The action details reach the dispatcher, which broadcasts them to every store registered with it. The matching action handler triggers and updates the store's value, and if any change happens in the store, the corresponding view updates automatically.

How themes are changing:

We have a store called “SettingStore.js”. This “SettingStore” contains theme values like the current theme. We also store other application settings in it, such as mic input settings, custom server details, speech output details and the default language, and it is responsible for providing these details to the corresponding views.

let CHANGE_EVENT = 'change';
class SettingStore extends EventEmitter {
   constructor() {
       super();
       this.theme = true; 
   }

We use “this.theme = true” in the constructor to set the light theme as the default theme.

getTheme() { //provides current value of theme
       return this.theme;
   }

This method returns the value of the current theme when it triggers.

   changeTheme(themeChanges) {
       this.theme = !this.theme;
       this.emit(CHANGE_EVENT);
   }

We use the “changeTheme” method to change the selected theme.

   handleActions(action) {
       switch (action.type) {
           case ActionTypes.THEME_CHANGED: {
               this.changeTheme(action.theme);
               break;
           }
           default: {
               // do nothing
           }
       }
   }
}

This switch case is where the store receives the actions distributed by the dispatcher and executes the relevant method.

const settingStore = new SettingStore();
ChatAppDispatcher.register(settingStore.handleActions.bind(settingStore));
export default settingStore;

Here we registered our store (SettingStore) with the “ChatAppDispatcher”.

This is how the store works.
Now we need to get the default light theme into the view. For that we call the getTheme() function and use the value it returns to update the state of the application.
Now we are going to change the theme. To do that we trigger the changeTheme method of the SettingStore from the view (MessageSection.react.js).
We trigger the method below on a click of the “Change Theme” button. It triggers the action called “themeChanged”.

 themeChanger(event) {
   Actions.themeChanged(!this.state.darkTheme);
 }

The previous method executes the themeChanged() function in the actions.js file.

export function themeChanged(theme) {
 ChatAppDispatcher.dispatch({
   type: ActionTypes.THEME_CHANGED,
   theme //data came from parameter
 });
};

In the above function we collect data from the view and send it, together with the action details, to the dispatcher.
The dispatcher sends those details to each and every registered store. In our case the “SettingStore” receives them and updates the state to the new dark theme.
This is how themes change in the SUSI AI Web Chat application. Check this link to see a preview.

Resources:

  • Read About Flux: https://facebook.github.io/flux/
  • GitHub repository: https://github.com/fossasia/chat.susi.ai
  • Live Web Chat: http://chat.susi.ai/

High School Physics Practicals using PSLab Device

The high school physics syllabus includes rather advanced concepts in the world of physics, namely practicals related to sound, heat, light and electricity. Almost all of these practicals are done in physics labs using conventional bulky equipment, especially the practicals related to electricity and electronics, which use bulky oscilloscopes and multimeters with complex controls. With PSLab, a sophisticated device which integrates a whole science lab into a 2″ by 2″ circuit, these practicals become easy and interesting.

There are several practicals a student is required to complete before sitting the GCE (Advanced Level) examination. This blog post points out how to perform those experiments using the PSLab device and a few other required tools, without having to use bulky instruments.

  • Calculating internal resistance and EMF of a dry cell

This experiment is built around a simple dry cell circuit. Once the basic setup is completed using;

  • Dry cell
  • 10K variable resistor
  • 1K resistor
  • Switch/Jumper

Connect the CH1 and GND pins of the PSLab across the dry cell, and the CH2 and GND pins in place of the ammeter. Once the pin connection is complete, plug the PSLab device into the computer. Using the voltmeter in the desktop application, measure the voltage from CH1, and derive the current from the voltage read on CH2 and the known resistance value of R. Record these data against step changes in the resistance of the variable resistor, starting from a high value and moving to a low value. Once there are sufficient data, as many as 10 data sets, draw the relation between the measured current and voltage on graph paper according to the equation given below.

E = V + Ir

V = -rI + E

This will produce a graph with a constant negative gradient. The magnitude of the gradient gives the internal resistance of the dry cell, and the intercept at I = 0 gives the EMF.
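
As a quick worked example with hypothetical readings: suppose the plotted line passes through (0.1 A, 1.45 V) and (0.3 A, 1.35 V). Then

gradient = (1.35 − 1.45) / (0.3 − 0.1) = −0.5 V/A, so r = 0.5 Ω

E = V + Ir = 1.45 + (0.1 × 0.5) = 1.5 V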

  • I-V graphs for a semiconductor

This experiment will require;

  • Diode
  • 100 Ohms resistor

Once the circuit is set up, conduct the experiment as follows. Increase the voltage provided by the PV1 pin of the PSLab in 100 mV steps and measure the voltage across the diode using the CH3 pin of the PSLab device. Using that voltage and the supply voltage, calculate the current flowing through the resistor. Note down the readings as voltage-current pairs and plot them on graph paper. The graph will produce the following result, which is identical to the common I-V characteristics of a semiconductor.

Start A at zero and increase by 0.1V steps while measuring voltage and current across diode. (Image from Wikipedia)

Apart from this conventional experiment, PSLab provides an inbuilt experiment that students and teachers can use. In the Electronics Experiments section, the Diode IV characteristics experiment is configured in such a way that this experiment can be conducted very easily. The circuit configuration is the same, and the R value should be 1K. Once the step size is set to an appropriate resolution, the START button will begin the experiment and produce the final graph.

  • Transfer characteristics of a transistor

Set up the above mentioned circuitry on a breadboard and connect the appropriate pins of the PSLab device as mentioned. In this experiment, students need to observe the increase in current/voltage across CH1 while PV2 is varying. We measure the CH1 value against the PV2 voltage value and plot the two against each other to obtain the transfer characteristics of a BJT.

Apart from this conventional experiment, PSLab provides an inbuilt experiment to perform this lab exercise. Using the “BJT Transfer Characteristics” experiment, students and teachers can easily obtain the characteristic curves by setting step sizes and voltage values on the required pins.

  • Ohm’s law

Configure the following circuit and connect the PCS, CH1 and GND pins of the PSLab in the relevant positions. This experiment measures the voltage across the resistor against the variation in current through the circuit. Increase the current provided to the circuit using the PCS pin, measure the voltage using the CH1 pin and plot it against the current. The gradient will produce the resistor value.
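
As a worked example with hypothetical readings: if the plotted line passes through (1 mA, 1 V) and (3 mA, 3 V), then

gradient = (3 − 1) V / (3 − 1) mA = 1 V/mA, so R = 1 kΩ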

The electrical experiments section in the PSLab application has a separate experiment to perform the Ohm's Law experiment. Using this experiment, students can change the current values using the knob and record the voltage values related to them. Finally the plot can be generated, which will have a constant gradient.

  • Kirchhoff’s Laws
  • Voltage law

Kirchhoff’s Voltage law states that the directed sum of the voltage around any closed network is zero.

This law can be proved using the following circuit with the help of the PSLab device. Connect the channel pins as per the diagram and measure the voltages on the three channels. The directed sum of the three readings will come to zero.

  • Current law

Kirchhoff's current law states that at any node in a circuit, the sum of currents flowing into that node is equal to the sum of currents flowing out of that node; equivalently, the signed sum of the currents at a node is zero.

Using the same circuit as above, measure the current flowing through each node using the voltage readings of the channels and the known resistor values. Taking into account the sign of each current, which is easily read in the PSLab desktop application, the summation will come to zero.
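
As a quick check with hypothetical readings: if 0.3 A and 0.2 A flow into a node and 0.5 A flows out of it, the signed sum is

0.3 + 0.2 − 0.5 = 0

which is exactly what the current law predicts.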


How to integrate SUSI AI in Google Assistant

Google Assistant is a personal assistant by Google, and it can interact with other applications. In fact, we can integrate our own applications with Google Assistant using Actions on Google. In order to integrate a SUSI action with Google Assistant we will follow these steps:

  1. Install Node.js from the link below on your computer if you haven’t installed it already. https://nodejs.org/en/
  2. Create a folder with any name, open a shell and change your current directory to the new folder you created. 
  3. Type npm init in the shell and enter details like name, version and entry point. 
  4. Create a file with the same name that you wrote as the entry point in the above step (app.js here, matching the package.json below), and it should be in the same folder you created. 
  5. Type the following commands in the command line: npm install --save actions-on-google. After actions-on-google is installed, type npm install --save express. On success, type npm install --save request, and at last type npm install --save body-parser. When all the modules are installed, check your package.json file; therein you'll see the modules in the dependencies portion.

    Your package.json file should look like this

    {
     "name": "susi_google",
     "version": "1.0.0",
     "description": "susi on google",
     "main": "app.js",
     "scripts": {
       "start": "node app.js"
     },
     "author": "",
     "license": "ISC",
     "dependencies": {
       "actions-on-google": "^1.1.1",
       "body-parser": "^1.17.2",
       "express": "^4.15.3",
       "request": "^2.81.0"
     }
    }
    
  6. Copy the following code into the file you created, i.e. app.js
    // Enable debugging
    process.env.DEBUG = 'actions-on-google:*';
    
    var Assistant = require('actions-on-google').ApiAiApp;
    var express = require('express');
    var bodyParser = require('body-parser');
    
    //defining app and port
    var app = express();
    app.set('port', (process.env.PORT || 8080));
    app.use(bodyParser.json());
    
    const SUSI_INTENT = 'name-of-action-you defined-in-your-intent';
    
    app.post('/webhook', function (request, response) {
     //console.log('headers: ' + JSON.stringify(request.headers));
     //console.log('body: ' + JSON.stringify(request.body));
    
     const assistant = new Assistant({request: request, response: response});
    
     function responseHandler (app) {
       // intent contains the name of the intent you defined in the Actions area of API.AI
       let intent = assistant.getIntent();
       switch (intent) {
         case SUSI_INTENT:
           var query = assistant.getRawInput(SUSI_INTENT);
           console.log(query);
    
           var options = {
           method: 'GET',
           url: 'http://api.asksusi.com/susi/chat.json',
           qs: {
               timezoneOffset: '-330',
               q: query
             }
           };
    
           var request = require('request');
           request(options, function(error, response, body) {
           if (error) throw new Error(error);
    
           var ans = (JSON.parse(body)).answers[0].actions[0].expression;
           assistant.ask(ans);
    
           });
           break;
    
       }
     }
     assistant.handleRequest(responseHandler);
    });
    
    // For starting the server
    var server = app.listen(app.get('port'), function () {
     console.log('App listening on port %s', server.address().port);
    });
    
  7. Now we will make a project in the Actions on Google developer console, and after that we will set up an action on the project using API.AI.
  8. The above step will open the API.AI console; create an agent there for the project you created above. 
  9. Now we have an agent. In order to create the SUSI action on Google, we will define an “intent” from the options in the left menu on API.AI; but since we have to receive responses from the SUSI API, we have to set up a webhook first.
  10. See the “fulfillment” option in the left menu. Open it to enable it and to enter the URL. We have to deploy the above-written code onto Heroku, but first make a GitHub repository and push the files in the folder we created above.

    In the command line, change the current directory to the folder we created above and write:
    git init
    git add .
    git commit -m "initial"
    git remote add origin <URL for remote repository>
    git remote -v
    git push -u origin master 

    You will get the URL for the remote repository by making a repository on your GitHub and copying the link of your repository.


  11. Now we have to deploy this GitHub repository to Heroku to get the URL. 
  12. If you don’t have an account on Heroku sign up here https://www.heroku.com/ else just sign in and create a new app. 
  13. Deploy your repository onto Heroku from deploy option and choosing GitHub as a deployment method.
  14. Select automatic deployment so that when you make any changes in the Github repository, they are automatically deployed to Heroku.
  15. Open your app from the option on the top right, copy the link of your Heroku app, append /webhook to it, and enter this URL as the fulfillment URL.

    https://{Your_App_Name}.herokuapp.com/webhook
  16. After setting up the webhook we will make an intent on API.AI, which will capture what the user asks SUSI. To create an intent, select “intent” from the left menu and create an intent with the information given in the screenshot below, then save your intent.

  17. After creating the intent, your agent is ready. You just have to integrate your agent with Actions on Google. Turn on its integration from the “integration” option in the left menu. 
  18. Your SUSI action on Google Assistant is ready now. To test it click on actions on Google in integration menu and choose “test” option.

  19. You can only test it with the same email you have used for creating the project in step 7. To test it on Google Assistant follow this demo video https://youtu.be/wHwsZOCKaYY

If you want to learn more about actions on Google with API.AI, refer to this link: https://developers.google.com/actions/apiai/

Resources
Actions on Google: https://developers.google.com/actions/apiai/
