Live Feeds in loklak Media wall using ‘source=twitter’

Loklak Server paginates the tweets returned by its search.json API so that responses arrive in smaller chunks and the server responds faster. The loklak media wall takes advantage of this pagination through the `source=twitter` parameter of the search.json API: with this parameter, the server scrapes Twitter in real time and returns live feeds. To keep response time low, it returns only as many feeds as specified by the count parameter (default is 100).

In this blog post, I explain how real time pagination was implemented with 'source=twitter' in the loklak media wall to get live feeds from Twitter.

Working

First API Call on Initialization

The first API call needs a high count (here, maximumRecords = 20) so that enough feeds are returned to fill up the media wall. 'source=twitter' must be specified so that real time feeds are scraped from Twitter and returned.

http://api.loklak.org/api/search.json?q=fossasia&callback=__ng_jsonp__.__req0.finished&minified=true&source=twitter&maximumRecords=20&timezoneOffset=-330&startRecord=1
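In the wall's search service, that first request might be configured roughly as below. This is only a sketch: the `source` field on `searchServiceConfig` is an assumption here, while `count`, `maximumRecords` and `fetchQuery()` appear in the snippets later in this post.

// a sketch of the initial request configuration (field names partly assumed)
this.searchServiceConfig.count = 20;           // high count to fill the wall
this.searchServiceConfig.maximumRecords = 20;
this.searchServiceConfig.source = 'twitter';   // assumed field: forces real time scraping
this.apiSearchService.fetchQuery('fossasia', this.searchServiceConfig);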

 

If feeds are received from the server, the next API request must be sent after 10 seconds so that the server gets sufficient time to scrape the data and store it in the database. This is done by an effect which dispatches WallNextPageAction with a debounceTime of 10000, so that the next request goes out 10 seconds after WallSearchCompleteSuccessAction().

@Effect()
nextWallSearchAction$
    = this.actions$
        .ofType(apiAction.ActionTypes.WALL_SEARCH_COMPLETE_SUCCESS)
        .debounceTime(10000)
        .withLatestFrom(this.store$)
        .map(([action, state]) => {
            return new wallPaginationAction.WallNextPageAction();
        });

Consecutive Calls

To implement pagination, consecutive API calls must be made to add new live feeds to the media wall. For these calls the count must be kept low (here, one) so that no heavy pagination takes place, feeds are added one by one, and new tweets get more focus.

this.searchServiceConfig.count = queryObject.count;
this.searchServiceConfig.maximumRecords = queryObject.count;

return this.apiSearchService.fetchQuery(queryObject.query.queryString, this.searchServiceConfig)
    .takeUntil(nextSearch$)
    .map(response => {
        return new wallPaginationAction.WallPaginationCompleteSuccessAction(response);
    })
    .catch(() => of(new wallPaginationAction.WallPaginationCompleteFailAction()));
});

 

Here, count and maximumRecords are updated from queryObject.count, which varies from 1 to 5 (default 1) and can be changed by the user from the customization menu.

Next API request is as follows:

http://api.loklak.org/api/search.json?q=fossasia&callback=__ng_jsonp__.__req2.finished&minified=true&source=twitter&maximumRecords=1&timezoneOffset=-330&startRecord=1

 

Now, as above, if a response is received, the next request is sent 10 seconds after WallPaginationCompleteSuccessAction(), again from an effect with debounceTime set to 10000.
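A sketch of that effect, analogous to the one shown earlier; the action type constant and the module it lives in are assumed here to mirror the success action dispatched above:

@Effect()
nextWallPaginationAction$
    = this.actions$
        .ofType(wallPaginationAction.ActionTypes.WALL_PAGINATION_COMPLETE_SUCCESS) // assumed action type
        .debounceTime(10000)          // wait 10 seconds before asking for more feeds
        .withLatestFrom(this.store$)
        .map(([action, state]) => {
            return new wallPaginationAction.WallNextPageAction();
        });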

In the same way, further consecutive calls are made, keeping 'source=twitter' and a low count so that new feeds stay in focus.


Implementing Advanced Search Feature In Susper

Susper provides an 'Advanced Search' feature that helps the user narrow down to the desired results. Advanced search shows top authors, top providers, and the distribution of protocols; users can choose any of these options to refine the results.

We receive the data for each facet from Yacy using the yacy search endpoint. More about the yacy search endpoint can be found here:  http://yacy.searchlab.eu/solr/select?query=india&fl=last_modified&start=0&rows=15&facet=true&facet.mincount=1&facet.field=host_s&facet.field=url_protocol_s&facet.field=author_sxt&facet.field=collection_sxt&wt=yjson

For implementing this feature, we created Actions and Reducers using concepts of Redux. The implemented actions can be found here: https://github.com/fossasia/susper.com/blob/master/src/app/actions/search.ts

Actions have been implemented because they represent events, for example the beginning of an API call here.
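A minimal sketch of such an action follows; the constant and class names here are illustrative and not necessarily those used in the linked file:

import { Action } from '@ngrx/store';

// fired when the user submits a query, i.e. at the beginning of the API call
export const QUERY = '[Search] Query';

export class QueryAction implements Action {
  readonly type = QUERY;
  constructor(public payload: string) { }
}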

We also created an interface for the search action, which can be found under reducers in the file index.ts: https://github.com/fossasia/susper.com/blob/master/src/app/reducers/index.ts

Reducers are pure functions that take the previous state and an action and return the next state. We have used Redux to implement actions and reducers for the advanced search.

For advanced search, the reducer file can be found here: https://github.com/fossasia/susper.com/blob/master/src/app/reducers/search.ts
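As a rough sketch of that idea (the state shape and action name below are illustrative, not copied from the linked file), a search reducer could look like this:

import * as search from '../actions/search';

export interface State {
  query: string;
}

const initialState: State = { query: '' };

// pure function: (previous state, action) => next state
export function reducer(state: State = initialState, action: any): State {
  switch (action.type) {
    case search.QUERY:  // illustrative action type
      // store the query carried by the action; everything else is untouched
      return Object.assign({}, state, { query: action.payload });
    default:
      return state;
  }
}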

The main logic has been implemented under advancedsearch.component.ts:

export class AdvancedsearchComponent implements OnInit {
  querylook = {}; // object holding the current query parameters
  navigation$: Observable<any>;
  selectedelements: Array<any> = []; // urls selected by the user

  // appends the selected url modifier to the query so that
  // results are fetched only from the selected provider
  changeurl(modifier, element) {
    this.querylook['query'] = this.querylook['query'] + '+' + decodeURIComponent(modifier);
    this.selectedelements.push(element);
    // navigate with the updated query so results are reloaded from yacy
    this.route.navigate(['/search'], {queryParams: this.querylook});
  }

  // the same approach is used for removing a url
  removeurl(modifier) {
    this.querylook['query'] = this.querylook['query'].replace('+' + decodeURIComponent(modifier), '');
    this.route.navigate(['/search'], {queryParams: this.querylook});
  }
}

 

The changeurl() function appends the selected URL to the query so that results are fetched only from that provider. The removeurl() function removes the URL from the query, turning it back into a normal search across all providers.

The source code for the implementation of advanced search feature can be found here: https://github.com/fossasia/susper.com/tree/master/src/app/advancedsearch


Adding Map and RSS Action Type Support to SUSI MagicMirror Module with React

SUSI, being an interactive personal assistant, answers questions in a variety of formats, including maps, RSS, tables, and pie charts. The SUSI MagicMirror Module earlier supported only the Answer action type, so if you asked about a location it could not show you a map. Support for more action types was therefore added so that users can benefit from the rich responses of SUSI.AI.

One problem faced while adding UI components is that in the MagicMirror module structure, each module supplies its DOM by overriding the getDom() method, so all of the UI has to be managed programmatically. Doing this in plain JavaScript is cumbersome: you create DOM nodes, manually apply styling to them, and append them to the parent DOM object that is returned. UI for each element has to be written like this:

getDom: function () {
        .... 
        ....
        const moduleDiv = document.createElement("div");

        const visualizerCanvas = document.createElement("canvas");
        moduleDiv.appendChild(visualizerCanvas);

        const mapDiv = document.createElement("div");
        loadMap(mapDiv,lat, long);
        moduleDiv.appendChild(mapDiv);
        ...
        ...
}

As you can see, manually managing the DOM is neither easy nor a recommended practice. It can be done more efficiently with React, the open source UI library by Facebook. React works on the concept of a Virtual DOM: the whole DOM object is created in memory and only the changed components are reflected in the document.

Since the SUSI MagicMirror Module is primarily written in TypeScript (a typed superset of JavaScript), we also need to write the React code in TypeScript. To add React to a TypeScript project, we need some dependencies, which can be added using:

$ yarn add react react-dom @types/react @types/react-dom

Now, we need to change our Webpack config to build .tsx files for React. TSX, like JSX, can contain HTML-like syntax as syntactic sugar for representing DOM objects. We change the resolve extensions and the loaders config so that awesome-typescript-loader compiles the TSX files. The config needs to be modified like below:

resolve: {
   extensions: [".js", ".ts", ".tsx", ".jsx"],
},

module: {
   loaders: [{
       test: /\.tsx?$/,
       loaders: ["awesome-typescript-loader"],
   },
       {
           test: /\.json$/,
           loaders: ["json-loader"],
       }],
},

This allows webpack to build and load both .tsx and .ts files. Now that the project is set up properly, we can add the UI for the Map and RSS action types.

The UI for Map is added with the help of the React-Leaflet library, a module built on top of the Leaflet map library for loading maps in the browser. We add React-Leaflet using:

$ yarn add react-leaflet

Now, we declare a MapView component in React and render a map in it using React-Leaflet. Custom styling can be applied to it. The render function for the MapView React component is defined as follows:

import * as React from "react";
import {Map, Marker, Popup, TileLayer} from "react-leaflet";
interface IMapProps {
   latitude: number;
   longitude: number;
   zoom: number;
}

export class MapView extends React.Component<IMapProps, any> {

   public constructor(props: IMapProps) {
       super(props);
   }

   public render(): JSX.Element | any | any {
       const center = [this.props.latitude, this.props.longitude];
       console.log(center);
       return <Map center={center} zoom={this.props.zoom} style={{height: "300px"}}>
           <TileLayer url="http://{s}.tile.osm.org/{z}/{x}/{y}.png"/>
           <Marker position={center}>
               <Popup>
                   <span> Here </span>
               </Popup>
           </Marker>
       </Map>;
   }
}

For the RSS action type UI, we define an RSS Card component. An RSS feed is made up of several such cards. An RSS Card is defined as follows:

import * as React from "react";

export interface IRssProps {
   title: string;
   description: string;
   link: string;
}

export class RSSCard extends React.Component <IRssProps, any> {

   constructor(props: IRssProps) {
       super(props);
   }

   public render(): JSX.Element | any | any {
       return <div className="card">
           <div className="card-title">{this.props.title}</div>
           <div className="card-description">{this.props.description}</div>
       </div>;
   }
}

Now, we define an RSS feed which is composed of these RSS cards. Since screen size is limited and the user has no way to scroll, we limit the number of cards displayed to 5 with a slice operation on the data array.

import * as React from "react";
import {IRssProps, RSSCard} from "./rss-card";

export interface IRSSFeedProps {
   feeds: Array<IRssProps>;
}

export class RSSFeed extends React.Component <IRSSFeedProps, any> {

   public constructor(props: IRSSFeedProps) {
       super(props);
   }

   public render(): JSX.Element | any | any {
       return <div className="rss-div">
           {this.props.feeds.map((feed: IRssProps) => {
                   return <RSSCard key={feed.title} title={feed.title} description={feed.description} link={feed.link}/>;
               }
           ).slice(0, 5)}
       </div>;
   }
}

Now, we can add these components to the UI easily and render them with ReactDOM like:

ReactDOM.render(<TableView data={tableData} columns={action.columns}/>, tableDiv);
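Similarly, the MapView and RSSFeed components defined above can be mounted into divs created in getDom(). In this sketch, moduleDiv is the parent node from the earlier getDom() snippet, the coordinates are arbitrary, and rssData is an assumed array of IRssProps; ReactDOM is assumed to be imported as in the line above.

const mapDiv = document.createElement("div");
// render a map centered on an example coordinate into its own div
ReactDOM.render(<MapView latitude={28.6139} longitude={77.2090} zoom={13}/>, mapDiv);
moduleDiv.appendChild(mapDiv);

const rssDiv = document.createElement("div");
// render the RSS feed (rssData: IRssProps[] is assumed to come from the SUSI response)
ReactDOM.render(<RSSFeed feeds={rssData}/>, rssDiv);
moduleDiv.appendChild(rssDiv);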

Below is an example screenshot of RSS and Map View in SUSI MagicMirror.


Adding Face Recognition based Authentication to SUSI MagicMirror Module

SUSI MagicMirror Module is a module designed for MagicMirror that brings SUSI intelligence right onto your mirror; you can then ask it questions much like the Queen does in the tale “Snow White and the Seven Dwarfs”. One key missing feature was that users could not be recognized, so their queries could not be answered in a personalized manner. This can be achieved if SUSI uses the account dedicated to that person to answer their queries; thus, we need authentication support.

Authentication on MagicMirror is not as trivial as in the Web, Android and iOS clients for SUSI. The key difference is that a user standing in front of the MagicMirror has no keyboard or mouse, so we cannot simply ask for an email and password. Furthermore, a MagicMirror installed in your home may be used by several members of your family, so we need a mechanism to tell the users apart.

This was done with the help of MMM-Facial-Recognition module which brings face recognition support to MagicMirror.

The MMM-Facial-Recognition module provides support for recognizing multiple faces with OpenCV and for setting the modules shown on the mirror based on the user facing it. Other modules can also learn about the recognized person through module notifications sent by MMM-Facial-Recognition.

To add face-based authentication support to SUSI with MMM-Facial-Recognition, we first need to add the latter to MagicMirror. It can be added easily by cloning the repository into the modules directory of MagicMirror:

$ git clone https://github.com/paviro/MMM-Facial-Recognition

Go inside the directory and install dependencies

$ npm install

Now, we need to train a model for the users who are going to use the MagicMirror. This can be done with the MMM-Facial-Recognition-Tools. The tool captures photos from the camera and trains a face recognition model; its usage guide is well written on the GitHub page, so I am not including it here. After training on the users' faces, you get a training.xml file containing information about the facial features of each person, which allows users to be told apart. Copy this file into the module directory of MMM-Facial-Recognition, i.e. MagicMirror/modules/MMM-Facial-Recognition.

After this, we can add the module to MagicMirror by modifying the config file (config.js). Add the following lines to it, and copy and paste the username array from the training script into the indicated place.

{
    module: 'MMM-Facial-Recognition',
    config: {
        // 1=LBPH | 2=Fisher | 3=Eigen
        recognitionAlgorithm: 1,
        lbphThreshold: 50,
        fisherThreshold: 250,
        eigenThreshold: 3000,
        useUSBCam: true,
        trainingFile: 'modules/MMM-Facial-Recognition/training.xml',
        interval: 2,
        logoutDelay: 15,
        // Array with usernames (copy and paste from training script)
        users: [],
        defaultClass: "default",
        everyoneClass: "everyone",
        welcomeMessage: true
    }
}

You can configure the show and hide behavior of modules based on the person; more about this can be found in the official guide in the repository. After setup, the module recognizes each user and shows them a welcome message.
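For instance, a module can be shown only to a particular trained user by giving it that user's name as a class. The entry below is an illustrative sketch for config.js, not taken from the project; the username must match the one used in the training script, and the clock module is just an example.

{
    module: 'clock',
    position: 'top_left',
    // shown only when this user is recognized by MMM-Facial-Recognition
    classes: 'Pranjal Paliwal'
},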

 

Now, we need to integrate this module with SUSI for authentication. First, we extend the config for the SUSI MagicMirror Module with the users' credentials along with the names they were registered with in the facial recognition module. This is done in the MagicMirror config file (config.js) like below.

{
       module: "MMM-SUSI-AI",
       position: "top_center",
       config: {
            hotword: "Susi",
            users: [{
                face_recognition_username: "Pranjal Paliwal",
                email: "paliwal.pranjal83@gmail.com",
                password: "PASSWORD_HERE"
            }, {
                face_recognition_username: "Chashmeet Singh",
                email: "chashmeetsingh@gmail.com",
                password: "PASSWORD_HERE"
            }],
        },
        classes: 'default everyone'
},

Now, we need to know which user is facing the mirror at a given time. MMM-Facial-Recognition sends a module notification whenever a user is detected. The format of the notification is:

sender : MMM-Facial-Recognition
type: CURRENT_USER
payload: Name of the User / None 

If the user is recognized, we get the name of the user as the payload. If no face could be identified, we get None as the payload.

We need to find the user based on the name registered in the module. We already have that parameter in each user object of the users array in the SUSI MagicMirror Module (MMM-SUSI-AI) config, so on receiving the notification we can iterate over the users array to find the user facing the mirror. In the SUSI Chat API, users are identified with the help of an access token, so once a user is identified we perform a login with the SignInService to obtain a token for them. The implementation of this can be understood from the following snippet.

public receivedNotification(type: NotificationType, payload: any): void {
   if (type === "CURRENT_USER") {
       console.log("Current User", payload);
       if (payload === "None") {
           this.configService.Config.accessToken = null;
       } else {
           console.log(this.config.users);
           for (const user of this.config.users) {
               if (user.face_recognition_username === payload) {
                   if (isUndefined(this.signInService)) {
                       this.signInService = new SignInService(user);
                   }
                   this.signInService.updateUser(user).then((token) => {
                       console.log("updating token for " + user);
                       this.configService.Config.accessToken = token;
                   });
                   return;
               }
           }
           this.configService.Config.accessToken = null;
       }
   }
}

Explanation: In the receivedNotification method of the main component of the SUSI MagicMirror module, we check whether the notification is of type CURRENT_USER. If the payload is None, we set the access token to null. If a user is identified, we check whether they are present in the users array; if so, we sign in to the SUSI Server for that user and store the obtained access token in the config.

Now, every time a user is recognized by the facial recognition module, the access token in the config is updated. We use the accessToken field in the config when sending the message to the SUSI Chat API, as shown below.

public async askSusi(query: string): Promise<any> {

   const accessToken = this.configService.Config.accessToken;

   const requestString: string = (!isUndefined(accessToken) && accessToken != null) ?
       `http://api.susi.ai/susi/chat.json?q=${query}&access_token=${accessToken}` :
       `http://api.susi.ai/susi/chat.json?q=${query}`;

   const response = await WebRequest.get(requestString);
   return JSON.parse(response.content);
}

With this approach, the requests sent to the SUSI Server are identified according to the person facing the mirror, and SUSI can answer accordingly. In this way, authentication with face recognition is performed in the SUSI MagicMirror Module.


Making Customized and Mobile Responsive Drop-down Menus in Susper using Angular

In Susper, the drop-down menu is customized with colorful search icons, and we wanted to keep the same menu on mobile screens too; however, the drop-down menu disappeared on all screens narrower than 767px. This blog post shows how to create CSS classes for such drop-down menus without relying on Bootstrap.
This is how the issue was solved.

  1. Replacing standard Bootstrap classes: the drop-down menu blocks had the following source code:

<div class="dropdown-menu">
  <div class="row">
    <div class="col-sm-4">
      <div class="block">
      </div>
    </div>
  </div>
</div>

Using col-sm-4 will do the following

  • For widths greater than 767px: place three equally sized columns in a row (col-sm-4 spans 4 of Bootstrap's 12 grid columns).
  • For widths smaller than 767px: Stack all the columns on top of each other.

Since the drop-down menu’s design was to remain intact, I made the following changes:

  • Replace row with menu-row
  • Replace col-sm-4 with menu-item

Then I wrote custom CSS for these classes.

.menu-row {
  width: 267px;
  grid-template-columns: 1fr 1fr 1fr;
  background-color: white;
}
.menu-item {
  display: inline-block;
  width: 86px;
}
  • Width: It is used to set the width of the div class, each row now has a width of 267px, with each column in it having a width of 86px.
  • Grid-template-columns: It is used to layout the structure of the template, here 1fr 1fr 1fr represents that there will be three columns in a row.
  • Display: inline-block overrides the default block behaviour of the div element, so the items do not each start on a new line.
  2. Custom CSS for small screens: in standard Bootstrap, for screen sizes below 767px, the dropdown-menu class has properties like a transparent background and no border that need to be overridden. So we add a new id to the div tag as shown:

<div id="small-drop" class="dropdown-menu">

/** Now we add CSS for it, as shown: **/
@media screen and (max-width: 767px) {
  #small-drop {
    position: absolute;
    background-color: white;
    border: 1px solid #cccccc;
    right: -38px;
    left: auto;
  }
}

  • Position: absolute takes the drop-down out of the normal flow and positions it relative to its nearest positioned ancestor rather than to the surrounding content.
  • Border: The values for the border represent the following respectively: Thickness, Style and Color.
  • Auto: the value auto for left means the left offset is not fixed; the browser computes it automatically.

References:

  1. For working of grids in Bootstrap: https://www.w3schools.com/bootstrap/bootstrap_grid_examples.asp
  2. A useful article for difference between id and class: https://css-tricks.com/the-difference-between-id-and-class

 


Making Autocomplete Box Compatible with the Search Bar using Angular in Susper

A problem in Susper was that we were using the same components on different pages with different styling properties. In particular, the autocomplete box was not properly aligned on the index page and looked like this:

This was happening because the autocomplete box width was set to 634px, which fits the search bar on the results page; the index page has a search bar of width 534px, so the autocomplete box was too large for it.
Here is how the issue was solved:

  1. Changing the suggestion box html code:

<div id="sug-box" class="suggestion-box" *ngIf="results.length > 0">
</div>

The code uses *ngIf, which is why setting the autocomplete box width from the TypeScript files becomes impossible: *ngIf does not load the component into the DOM until there are results, so the autocomplete box was not in the DOM until after a query was typed in the search bar. That is why we could not set its width, and it was decided to remove this attribute. Using a hidecomponent emitter (in the TypeScript file) is a better option here:

@Output() hidecomponent: EventEmitter<any> = new EventEmitter<any>();

if (this.results.length === 0) {
  this.hidecomponent.emit(1);
} else {
  this.hidecomponent.emit(0);
}

See autocomplete.component.ts for the complete code.

  2. It is now required to dynamically change the id of the suggestion box depending on the page it is on, and apply the correct CSS.

Here is the html code:

<div [id]="getID()" class="suggestion-box" *ngIf="results">

The id of the suggestion box will now depend on the value returned from the function getID(), defined as follows:

getID() {
  if (this.route.url.toString() === '/') {
    return 'index-sug-box';
  } else {
    return 'sug-box';
  }
}
  • We first check if the route url is simply '/' (which implies we are on the index page).
  • If yes, the id is set to index-sug-box; otherwise to sug-box.

Now we can write extra CSS properties for the index-sug-box id as follows:

#index-sug-box{
width: 586px;
}

References:

  1. For basic javascript functions: https://www.w3schools.com/js/js_functions.asp
  2. To understand components in Angular: https://angular.io/api/core/Component

 


Implementing Sort By Date Feature In Susper

 

Susper now has a 'Sort By Date' feature which gives the user the most recent results first. This enhances the search experience and helps users find the desired results more accurately. The date-wise sorting is done by the yacy backend, which uses Apache Solr.

The idea was to create a 'Sort By Date' feature similar to the market leader's. For example, if a user searches for the keyword 'Jaipur', the results appear like this:

If a user wishes to get latest results, they can use ‘Sort By Date’ feature provided under ‘Tools’.

The above screenshot shows the sorted results.

You may, however, notice that results are not arranged year-wise. The backend work for this is currently going on in Yacy and it will be implemented on the frontend as well once the backend provides this feature.

Under 'Tools' we created an option for 'Sort By Date' simply using an <li> tag.

<ul class="dropdown-menu">
  <li (click)="filterByDate()">Sort By Date</li>
</ul>

When clicked, it calls filterByDate() function to perform the following task:

filterByDate() {
  let urldata = Object.assign({}, this.searchdata);
  // append the '/date' modifier (removing any earlier occurrence first)
  // so that yacy returns the results sorted by date
  urldata.query = urldata.query.replace('/date', '') + '/date';
  this.store.dispatch(new queryactions.QueryServerAction(urldata));
}

Earlier we used the 'last_modified desc' attribute provided by Solr to sort dates in descending order. In June 2017 this was deprecated in a new Solr update, so we now use the /date attribute in the query, which is provided by Solr, to sort the results.

 


Implementation of Text-To-Speech Feature In Susper

Susper has a voice search feature which gives the user a better search experience. To enhance it further, we added Speech Synthesis, i.e. a Text-To-Speech feature. Speech synthesis should only work when a voice search is attempted.

The idea was to create speech synthesis similar to the market leader's. Here is the link to a YouTube video showing a demo of the feature: Video link

The video demonstrates that:

  • If a manual search is used then the feature should not work.
  • If voice search is used then the feature should work.

For implementing this feature, we used the Speech Synthesis API, which is available in Google Chrome version 33 and above.

A quick way to check whether the browser supports this feature is to try window.speechSynthesis.speak(new SpeechSynthesisUtterance('Hello world!'));

First, we created an interface:

interface IWindow extends Window {
  SpeechSynthesisUtterance: any;
  speechSynthesis: any;
};

 

Then, under the @Injectable decorator, we created a class for the SpeechSynthesisService:

export class SpeechSynthesisService {
  utterence: any;

  constructor(private zone: NgZone) { }

  speak(text: string): void {
    const { SpeechSynthesisUtterance }: IWindow = <IWindow>window;
    const { speechSynthesis }: IWindow = <IWindow>window;

    this.utterence = new SpeechSynthesisUtterance();
    this.utterence.text = text;     // text to be uttered
    this.utterence.lang = 'en-US';  // default language
    this.utterence.volume = 1;      // can be set between 0 and 1
    this.utterence.rate = 1;        // can be set between 0.1 and 10
    this.utterence.pitch = 1;       // can be set between 0 and 2

    (window as any).speechSynthesis.speak(this.utterence);
  }

  // to pause the queue of utterances
  pause(): void {
    const { speechSynthesis }: IWindow = <IWindow>window;
    const { SpeechSynthesisUtterance }: IWindow = <IWindow>window;
    this.utterence = new SpeechSynthesisUtterance();
    (window as any).speechSynthesis.pause();
  }
}

 

The above code implements the Text-To-Speech feature.

The source code for the implementation can be found here: https://github.com/fossasia/susper.com/blob/master/src/app/speech-synthesis.service.ts

We call speech synthesis only when the voice search mode is activated. Here we used the redux store to check whether the mode is 'speech'; if it is, the description inside the infobox is uttered.

We did the following changes in infobox.component.ts:

import { SpeechSynthesisService } from '../speech-synthesis.service';

speechMode: any;

constructor(private synthesis: SpeechSynthesisService) { }

this.query$ = store.select(fromRoot.getwholequery);
this.query$.subscribe(query => {
  this.keyword = query.query;
  this.speechMode = query.mode;
});

 

And we added a conditional statement to check whether mode is ‘speech’ or not.

// conditional statement
// to check if mode is 'speech' or not
if (this.speechMode === 'speech') {
  this.startSpeaking(this.results[0].description);
}

startSpeaking(description) {
  this.synthesis.speak(description);
  this.synthesis.pause();
}

 

 

 


Reducing Initial Load Time Of Susper

Susper used to take a long time to load the initial page, so there was a discussion on how to decrease its initial loading time.

On going through issues raised in the official Angular repository regarding the time taken to load Angular applications, we found some solutions. These include:

  • Migrating from development to production during build:
    • This shrinks vendor.js and main.js by minimising them and also removing all packages that are not required on production.
    • Enables inline html and extracting CSS.
    • Disables source maps.
  • Enable Ahead of Time (AoT) compilation.
  • Enable service workers.

After these changes, we observed the following differences in Susper:

File name     Before (content-length)     After (content-length)
vendor.js     709752                      216764
main.js       56412                       138361

LOADING TIME GRAPHS:

The before and after loading time graphs for the vendor.js and main.js files were captured as screenshots.

We could also see that all files are now initiated by the service worker.

More about service workers can be read at Mozilla and in Service Workers in Angular.

Implementation:

While deploying our application, we added --prod and --aot as extra flags to ng build (i.e. ng build --prod --aot). This makes Angular use production mode and ahead of time compilation.

For service workers, we have to install the @angular/service-worker module and enable it by using:

npm install @angular/service-worker --save
ng set apps.0.serviceWorker=true

The whole implementation of this is available at this pull:

https://github.com/fossasia/susper.com/pull/597/files


Adding Color Options in loklak Media Wall

Color options in the loklak media wall give the user the ability to set colors for different elements of the media wall. Taking advantage of Angular's data binding and ngrx/store, we link the CSS properties of the elements to state properties that store the user-selected colors. This makes color customization fast and reactive for media walls.

In this blog post, I explain the unidirectional ngrx workflow for updating the various colors and how the color customization works.

Flow Chart

The flowchart below explains how a color string entered by the user flows through actions, reducers and component observables to finally change the CSS property, here the font color.
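For example, a color picked from the customization menu might be dispatched roughly like this; the handler name and its arguments are illustrative, while the action class itself is defined further below:

onHeaderColorChange(backgroundColor: string, fontColor: string): void {
  // dispatching the action triggers the reducer, which updates the state
  // property bound to the header's CSS
  this.store.dispatch(
    new mediaWallCustomAction.WallHeaderPropertiesChangeAction({ backgroundColor, fontColor })
  );
}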

Working

Designing Models: It is important to first design models that contain every CSS color property that can be customized. Having a single interface per HTML element of the media wall lets the customization for that element happen in one go, with faster rendering. Here we have three interfaces:

  • WallHeader
  • WallBackground
  • WallCard

These three interfaces are the models for the three core components of the media wall that can be customized.

export interface WallHeader {
  backgroundColor: string;
  fontColor: string;
}

export interface WallBackground {
  backgroundColor: string;
}

export interface WallCard {
  fontColor: string;
  backgroundColor: string;
  accentColor: string;
}

 

Creating Actions: The next step is to design actions for customization. Each action carries the respective interface model as payload with the updated color properties. When dispatched, these actions cause the reducer to change the respective state property and hence the linked CSS color property.

export class WallHeaderPropertiesChangeAction implements Action {
  type = ActionTypes.WALL_HEADER_PROPERTIES_CHANGE;
  constructor(public payload: WallHeader) { }
}

export class WallBackgroundPropertiesChangeAction implements Action {
  type = ActionTypes.WALL_BACKGROUND_PROPERTIES_CHANGE;
  constructor(public payload: WallBackground) { }
}

export class WallCardPropertiesChangeAction implements Action {
  type = ActionTypes.WALL_CARD_PROPERTIES_CHANGE;
  constructor(public payload: WallCard) { }
}

 

Creating reducers: Now, we can create reducer functions to change the current state properties. We also need to define an initial state, which is the default state of an uncustomized media wall; a possible shape of that initial state is sketched after the list below. When dispatched, the actions update the state properties through this reducer. These state properties serve two purposes:

  • Updating Query params for Direct URL.
  • Updating Media wall Colors
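A sketch of what such a default state could look like; the State shape and the color values here are illustrative, not taken from the project:

interface State {
  wallHeader: WallHeader;
  wallBackground: WallBackground;
  wallCard: WallCard;
}

// illustrative defaults for an uncustomized wall
const initialState: State = {
  wallHeader: { backgroundColor: '#ffffff', fontColor: '#333333' },
  wallBackground: { backgroundColor: '#f2f2f2' },
  wallCard: { fontColor: '#333333', backgroundColor: '#ffffff', accentColor: '#1da1f2' }
};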

case mediaWallCustomAction.ActionTypes.WALL_HEADER_PROPERTIES_CHANGE: {
  const wallHeader = action.payload;

  return Object.assign({}, state, {
    wallHeader
  });
}

case mediaWallCustomAction.ActionTypes.WALL_BACKGROUND_PROPERTIES_CHANGE: {
  const wallBackground = action.payload;

  return Object.assign({}, state, {
    wallBackground
  });
}

case mediaWallCustomAction.ActionTypes.WALL_CARD_PROPERTIES_CHANGE: {
  const wallCard = action.payload;

  return Object.assign({}, state, {
    wallCard
  });
}

 

Extracting data into the component from the store: In ngrx, the central container for state is the store. The store is itself an observable and returns observables for individual state properties. Since we have already defined the states for the media wall color options, we can use selectors to get the corresponding state observables from the store; these are then linked to the CSS of the elements, which changes according to the customization.

private getDataFromStore(): void {
  this.wallCustomHeader$ = this.store.select(fromRoot.getMediaWallCustomHeader);
  this.wallCustomCard$ = this.store.select(fromRoot.getMediaWallCustomCard);
  this.wallCustomBackground$ = this.store.select(fromRoot.getMediaWallCustomBackground);
}

 

Linking state observables to the CSS properties: First, all hard-coded CSS color properties are removed from the elements that need to be customized. Instead, we use Angular's style binding in the template, which updates CSS properties directly from component variables. Since the customized colors received from the central store are observables, we use the async pipe to extract the color string from them.

Here, we are updating background color of the wall.

<span class="wrapper"
  [style.background-color]="(wallCustomBackground$ | async).backgroundColor">
</span>

 

For child components, we use the @Input decorator to pass the color data down and then use the style binding as above.

Here, we are interacting with the child component i.e. media wall card component using @Input Decorator.

Template:

<media-wall-card
  [feedItem]="item"
  [wallCustomCard$]="wallCustomCard$"></media-wall-card>

 

Component:

export class MediaWallCardComponent implements OnInit {
..
@Input() feedItem: ApiResponseResult;
@Input() wallCustomCard$: Observable<WallCard>;
..
}

 

This creates a clean binding between the CSS properties in the template and the state properties set by the color actions. Now, dispatching the different actions updates the state and hence the colors of the media wall.
