Query Model Structure of Loklak Search

Need to restructure

Earlier versions of the loklak search application suffered from breaking changes whenever a new feature was added. The main reason for these unintended bugs was identified to be the existing query structure, which consisted of a single entity: a string named queryString.

export interface Query {
 queryString: string;
}

This single query string was good enough for the plain string based searches that were the goal of the application in its initial phases, but as the application progressed we realized that a string based implementation would not be feasible in the long term. There is only a limited amount we can do with strings, and it becomes extremely difficult to set and reset portions of the string as requirements change. The main vision for the alternate architecture was therefore the ability to enable and disable features on the fly.

Application Structure

Therefore, to overcome the difficulties of the simple string based structure, we introduced an attribute based structure for queries. The attribute based structure is simpler to understand and makes it easier to maintain the state of the query in the application.

export interface Query {
 displayString: string;
 queryString: string;
 routerString: string;
 filter: FilterList;
 location: string;
 timeBound: TimeBound;
 from: boolean;
}

This is called an attribute based structure because each property of the interface is independent of the others. Each one can be thought of as a separate little key placed on the query, and these keys are mutually exclusive. In other words, if I want to set one attribute of the query, it does not matter which other attributes are already present. The query will eventually be processed and sent to the server, and any valid results will be shown to the user.

Now the question arises: how do we modify the values of these attributes? Before answering this, note that this interface is actually instantiated in the Redux state, so the question almost answers itself: the part of the Redux state corresponding to the query structure is modified by specific reducers, one responsible for each attribute. These reducers are in turn triggered by corresponding actions.

export const ActionTypes = {
 VALUE_CHANGE: '[Query] Value Change',
 FILTER_CHANGE: '[Query] Filter Change',
 LOCATION_CHANGE: '[Query] Location Change',
 TIME_BOUND_CHANGE: '[Query] Time Bound Change',
};

This ActionTypes object contains the action types used to trigger the reducers. These actions can be dispatched by any of the components in response to user interaction, thus modifying a particular state attribute via the reducer.
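As an illustration, here is a minimal sketch of how one of these actions and its corresponding reducer case could look. The class name LocationChangeAction, the payload shape and the reducer body are assumptions for demonstration, reusing the Query interface and ActionTypes object shown above; the actual loklak search implementation may differ.

import { Action } from '@ngrx/store';

// Hypothetical action class for the LOCATION_CHANGE action type;
// the payload carries the new location value.
export class LocationChangeAction implements Action {
  type = ActionTypes.LOCATION_CHANGE;
  constructor(public payload: string) { }
}

// Sketch of the matching reducer case: only the location attribute changes,
// every other attribute of the query state is carried over untouched.
export function queryReducer(state: Query, action: LocationChangeAction): Query {
  switch (action.type) {
    case ActionTypes.LOCATION_CHANGE:
      return Object.assign({}, state, { location: action.payload });
    default:
      return state;
  }
}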

Converting from object to string

For the API endpoint to understand our query, we need to send a string in the format the API accepts. This requires dynamically converting the query state into a query string, which is done by a simple function that takes the query state as input and returns the query string as output.

export function parseQueryToQueryString(query: Query): string {
  let qs: string;
  qs = query.displayString;
  if (query.location) {
    qs += ` near:${query.location}`;
  }
  if (query.timeBound.since) {
    qs += ` since:${parseDateToApiAcceptedFormat(query.timeBound.since)}`;
  }
  if (query.timeBound.until) {
    qs += ` until:${parseDateToApiAcceptedFormat(query.timeBound.until)}`;
  }
  return qs;
}

In this function we simply check the attributes set in the structure, append the corresponding pieces to the query string, and return it. So if we eventually have to convert to a string anyway, what is the advantage of this approach? The main advantage is that we know the query structure beforehand and use it to build the string, rather than arbitrarily inserting and removing pieces of an existing string. Whenever any attribute of the query state is updated, the query string is generated fresh instead of modifying the old one.
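As a quick illustration, a query state with a location and a since date set would be serialized roughly as follows. The values, the null filter and the yyyy-mm-dd date format are assumptions made for this example; the real output depends on FilterList, TimeBound and parseDateToApiAcceptedFormat.

// Illustrative query state; the FilterList and TimeBound shapes are assumed here.
const query: Query = {
  displayString: 'fossasia',
  queryString: '',
  routerString: '',
  filter: null,
  location: 'Singapore',
  timeBound: { since: new Date('2017-06-01'), until: null },
  from: false
};

parseQueryToQueryString(query);
// -> 'fossasia near:Singapore since:2017-06-01'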

Conclusion

This approach lets the application modify the search queries sent to the server in a streamlined and logical way, using a simple data structure. The query model has given us clear gains in application stability and performance, and it cuts out messy regex matching of typed queries, which again helps us keep queries simple.


Adding unit tests for effects in Loklak Search

Loklak search uses @ngrx/effects to listen to actions dispatched by the user and to send API requests to the loklak server. Loklak search currently has seven effects, such as Search Effects and Suggest Effects, which run to make the application reactive. It is important to test these effects to ensure that they make API calls at the right time and map the response before sending it back to the reducer.

I will  explain here how I added unit tests for the effects. Surprisingly, the test coverage increased from 43% to 55% after adding these tests.

Effects to test

We are going to test the effect for user search. This effect listens for actions of type USER_SEARCH and calls the user-search service with the query as a parameter. After a response is received, it maps the response and passes it on to the UserSearchCompleteSuccessAction, which performs the subsequent operations. If the service fails to get a response, it dispatches the UserSearchCompleteFailAction instead.

Code

ApiUserSearchEffects is the effect which detects when the USER_SEARCH action is dispatched from some component of the application; it then calls the UserSearchService and handles the JSON response received from the server. The effect dispatches a new UserSearchCompleteSuccessAction if a response is received from the server, or a new UserSearchCompleteFailAction if no response is received. The debounce time is set to 400 so that the response is flushed if a new USER_SEARCH is dispatched within the next 400ms.

For this effect, we need to test that it actually runs when a USER_SEARCH action is dispatched. Further, we need to test that the correct parameters are supplied to the service and that the response is handled properly. We also need to check that the response is really flushed within the debounce time limit.

@Injectable()
export class ApiUserSearchEffects {

  @Effect()
  search$: Observable<Action> = this.actions$
    .ofType(userApiAction.ActionTypes.USER_SEARCH)
    .debounceTime(400)
    .map((action: userApiAction.UserSearchAction) => action.payload)
    .switchMap(query => {
      const nextSearch$ = this.actions$.ofType(userApiAction.ActionTypes.USER_SEARCH).skip(1);
      const follow_count = 10;

      return this.apiUserService.fetchQuery(query.screen_name, follow_count)
        .takeUntil(nextSearch$)
        .map(response => new userApiAction.UserSearchCompleteSuccessAction(response))
        .catch(() => of(new userApiAction.UserSearchCompleteFailAction()));
    });

  constructor(
    private actions$: Actions,
    private apiUserService: UserService
  ) { }
}

Unit test for effects

  • Configure the TestBed class before starting the unit test and add all the necessary imports (most importantly the EffectsTestingModule) and providers. This step isolates the effects completely from all other components so that they can be tested independently. We also need to create a spy object which spies on the method userService.fetchQuery, with UserService as the provider.

beforeEach(() => TestBed.configureTestingModule({
imports: [
EffectsTestingModule,
RouterTestingModule
],
providers: [
ApiUserSearchEffects,
{
provide: UserService,
useValue: jasmine.createSpyObj('userService', ['fetchQuery'])
}
]
}));
  • Now we need a setup function which takes params, the data to be returned by the mock User Service, so that we can configure the response returned by the service. This function also initializes the EffectsRunner and returns ApiUserSearchEffects so that it can be used in the unit tests.

function setup(params?: {userApiReturnValue: any}) {
  const userService = TestBed.get(UserService);
  if (params) {
    userService.fetchQuery.and.returnValue(params.userApiReturnValue);
  }
  return {
    runner: TestBed.get(EffectsRunner),
    apiUserSearchEffects: TestBed.get(ApiUserSearchEffects)
  };
}

 

  • Now we add the unit tests for the effect. In these tests we check that the effect recognises the action, returns a new action based on the desired response, and maps the response only after a certain debounce time. We use fakeAsync(), which gives us access to the tick() function. Next, we call the setup function and pass in the mock response so that whenever the User Service is called it returns the mock response and never actually runs the real service. We then queue a UserSearchAction in the runner and subscribe to the value returned by the effects class. Finally, we assert on the returned value with expect() and verify that it is returned only after the debounce time using tick().

it('should return a new userApiAction.UserSearchCompleteSuccessAction, ' +
  'with the response, on success, after the de-bounce', fakeAsync(() => {
  const response = MockUserResponse;
  const {runner, apiUserSearchEffects} = setup({userApiReturnValue: Observable.of(response)});

  const expectedResult = new userApiAction.UserSearchCompleteSuccessAction(response);

  runner.queue(new userApiAction.UserSearchAction(MockUserQuery));

  let result = null;
  apiUserSearchEffects.search$.subscribe(_result => result = _result);
  tick(399); // test debounce
  expect(result).toBe(null);
  tick(401);
  expect(result).toEqual(expectedResult);
}));

it('should return a new userApiAction.UserSearchCompleteFailAction, ' +
  'if the SearchService throws', fakeAsync(() => {
  const { runner, apiUserSearchEffects } = setup({ userApiReturnValue: Observable.throw(new Error()) });

  const expectedResult = new userApiAction.UserSearchCompleteFailAction();

  runner.queue(new userApiAction.UserSearchAction(MockUserQuery));

  let result = null;
  apiUserSearchEffects.search$.subscribe(_result => result = _result);

  tick(399); // Test debounce
  expect(result).toBe(null);
  tick(401);
  expect(result).toEqual(expectedResult);
}));
});


Introducing Customization in Loklak Media Wall

My GSoC project includes implementing a media wall in loklak search. One part of the issue is to include customization options in the media wall. I looked around for the most important features that can be implemented in a media wall to give the user a more appealing and personalized view. One such feature is a Full Screen Mode, which lets the user display the media wall on a projector or any big display screen without compromising the space available. In the first part, I explain how I implemented Full Screen Mode in the loklak media wall using the screenfull.js library.

Secondly, it is important to include a reactive and user-friendly settings box, a central container holding all the customization options. In the loklak media wall, the settings box is implemented as a dialog box with the options classified into tabs. I also explain how I designed the customization menu using Angular Material.

Implementation

Full Screen Mode

Since loklak search is an Angular 2 application and all the code is written in TypeScript, we can't simply drop in the screenfull.js library. We have to import the library into our application and create a directive that can be applied to an element in order to use it.

  • Install the screenfull library in the application using the Node Package Manager, and create a directive that calls screenfull.toggle() when the host element is clicked:

npm install --save screenfull

import { Directive, HostListener, Output, EventEmitter } from '@angular/core';
import * as screenfull from 'screenfull';

@Directive({
  selector: '[toggleFullscreen]'
})
export class ToggleFullscreenDirective {

  constructor() { }

  @HostListener('click') onClick() {
    if (screenfull.enabled) {
      screenfull.toggle();
    }
  }
}
  • Import the directive into the module and add it to the declarations. This allows the directive to be used anywhere in the templates.

import { ToggleFullscreenDirective } from '../shared/full-screen.directive';
.
.
.
@NgModule({
.
.
.
declarations: [
.
.
.
ToggleFullscreenDirective,
.
.
.
]
})
export class MediaWallModule { }
  • Now the directive is ready to use in a template. We just have to add this attribute directive to an element.

<i toggleFullscreen mdTooltip="Full Screen" mdTooltipPosition="below" class="material-icons md-36">fullscreen</i>

Customization Menu

The customization menu follows the idea of a central container for customization. It is built using two Angular Material components – Dialog and Tabs. We will now look at how the customization menu is implemented using these two components.

  • Create a component with the pre-configured position, height and width of the dialog box. This can be done simply using the updatePosition and updateSize methods of the MdDialogRef class.

export class MediaWallCustomizationComponent implements OnInit {
  public query: string;

  constructor(
    private dialogRef: MdDialogRef<MediaWallCustomizationComponent>,
    private store: Store<fromRoot.State>,
    private location: Location) { }

  ngOnInit() {
    this.dialogRef.updatePosition('10px');
    this.dialogRef.updateSize('80%', '80%');
  }

  public searchAction() {
    if (this.query) {
      this.store.dispatch(new mediaWallAction.WallInputValueChangeAction(this.query));
      this.location.go('/wall', `query=${this.query}`);
    }
  }
}
  • Create a template for the customization menu. We use md-tab and md-dialog to create a dialog box with the options displayed in tabs. dynamicHeight should be set to true so that the dialog box adjusts to the active tab. Adding the md-dialog-close attribute to a button makes it close the dialog box, and all the content goes inside an element with the md-dialog-content attribute. Moreover, to make the options more user-friendly and usable on smaller screens, icons should be added alongside the tab titles.

<h1 md-dialog-title>Customization Menu</h1>
<button class="form-close" md-dialog-close>x</button>
<span md-dialog-content>
  <md-tab-group color="accent" dynamicHeight="true">
    <md-tab>
      <ng-template md-tab-label>
        <md-icon>search</md-icon>
        Search For
      </ng-template>
      <h3> Search Customization </h3>
      <md-input-container class="example-full-width" color="accent">
        <input placeholder="Search Term" mdInput type="text" class="input" name="search-term" [(ngModel)]="query">
      </md-input-container>
      <span class="apply-button">
        <button md-raised-button color="accent" md-dialog-close (click)="searchAction()">Display</button>
      </span>
    </md-tab>
  </md-tab-group>
</span>

The code above covers the search customization tab. It records the input using [(ngModel)] for two-way binding and calls the search action whenever the user clicks the Display button.

  • Add a button which opens the dialog box using the open method of the MdDialog class. This method creates an instance of MediaWallCustomizationComponent, and the component shows up dynamically.

<i class="material-icons md-36" (click)="dialog.open(MediaWallCustomizationComponent)">settings</i>
  • It is important to add MediaWallCustomizationComponent as an entry component in the module so that the AOT compiler can create a ComponentFactory for it during initialization.

import { MediaWallCustomizationComponent } from './media-wall-customization/media-wall-customization.component';

@NgModule({
entryComponents: [
MediaWallCustomizationComponent
]
})
export class MediaWallModule { }

 

This creates an appealing and user-friendly customization menu which acts as a central container for the customization options.


Implementation of Customizable Instant Search on Susper using Local Storage

Results on Susper can be displayed instantly as the user types a query. Until recently this behaviour was fixed and the user had no option to customize it, but now the feature can be turned on and off.

To turn this feature on or off, visit 'Search Settings' on Susper at http://susper.com/preferences, where you will find the different options to choose from.

How did we implement this feature?

Searchsettings.component.html:

<div>
 <h4><strong>Susper Instant Predictions</strong></h4>
 <p>When should we show you results as you type?</p>
 <input name="options" [(ngModel)]="instantresults" disabled value="#" type="radio" id="op1"><label for="op1">Only when my computer is fast enough</label><br>
 <input name="options" [(ngModel)]="instantresults" [value]="true" type="radio" id="op2"><label for="op2">Always show instant results</label><br>
 <input name="options" [(ngModel)]="instantresults" [value]="false" type="radio" id="op3"><label for="op3">Never show instant results</label><br>
</div>

The user is shown the options for instant search. When the user selects a new option, the selection is stored in the instantresults variable of the search settings component using ngModel.

Searchsettings.component.ts:

Later, when the user clicks on the save button, the instantresults value is stored in the browser's localStorage:

onSave() {
 if (this.instantresults) {
   localStorage.setItem('instantsearch', JSON.stringify({value: true}));
 } else {
   localStorage.setItem('instantsearch', JSON.stringify({ value: false }));
   localStorage.setItem('resultscount', JSON.stringify({ value: this.resultCount }));
 }
 this.router.navigate(['/']);
}
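For context, the properties bound in the template and used by onSave() would be declared in the settings component roughly like this; this is a hypothetical sketch (decorator and other options omitted), not the exact Susper source.

import { Router } from '@angular/router';

export class SearchSettingsComponent {
  // Bound to the radio buttons via [(ngModel)]; true means 'always show instant results'.
  instantresults: boolean;
  // Results per page, persisted alongside the instant-search preference in onSave().
  resultCount = 10;

  constructor(private router: Router) { }
}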

 

Later this value is retrieved from localStorage whenever the user enters a query in the search bar component, and the search is made according to the user's preference.

Searchbar.component.ts


onquery(event: any) {
 this.store.dispatch(new query.QueryAction(event));
 let instantsearch = JSON.parse(localStorage.getItem('instantsearch'));

 if (instantsearch && instantsearch.value) {
   this.store.dispatch(new queryactions.QueryServerAction({'query': event, start: this.searchdata.start, rows: this.searchdata.rows}));
   this.displayStatus = 'showbox';
   this.hidebox(event);
 } else {
   if (event.which === 13) {
     this.store.dispatch(new queryactions.QueryServerAction({'query': event, start: this.searchdata.start, rows: this.searchdata.rows}));
     this.displayStatus = 'showbox';
     this.hidebox(event);
   }
 }
}

 

Interaction of different components here:

  1. First we store the instant-search preference in localStorage from the search settings component.
  2. Later this value is retrieved by the search bar component using the localStorage.getItem() method to decide whether to display results instantly or not.

The GIF below shows how you can use this feature in Susper to customise instant results in your browser.


 

Adding Unit Test for Reducer in loklak search

Ngrx/store components are an integral part of loklak search. All the components depend on how data is received from the reducers. A reducer is like a client-side database which stores the data received from the API responses; it is responsible for changing the state of the application and supplies data from the central Store to the Angular components. If the components do not receive correct data, the application can break. Therefore, we need to test that the reducer stores the data correctly and changes the state of the application as expected.

The reducer also stores the current state properties fetched from the APIs. We need to check that reducers store the data correctly and that the data is delivered to the Angular components when requested.

In this blog post, I explain how to build the different pieces needed for unit testing reducers.

Reducer to test

This reducer stores the data from the suggest.json API of the loklak server. The data received from the server is classified into three properties which the components can use to show auto-suggestions related to the current search query.

  • metadata: – This property stores the metadata from API suggestion response.
  • entities: – This property stores the array of suggestions corresponding to the particular query received from the server.
  • valid: – This is a boolean which keeps a check if the suggestions are valid or not.

We also have two actions corresponding to this reducer. These actions, when dispatched, change the state properties, which in turn supply data to the components in a more structured manner. The state properties also cause the UI of the component to change according to the action dispatched.

  • SUGGEST_COMPLETE_SUCCESS: – This action is called when the data is received successfully from the server.
  • SUGGEST_COMPLETE_FAIL: – This action is called when retrieving data from the server fails.

export interface State {
  metadata: SuggestMetadata;
  entities: SuggestResults[];
  valid: boolean;
}

export const initialState: State = {
  metadata: null,
  entities: [],
  valid: true
};

export function reducer(state: State = initialState, action: suggestAction.Actions): State {
  switch (action.type) {
    case suggestAction.ActionTypes.SUGGEST_COMPLETE_SUCCESS: {
      const suggestResponse = action.payload;

      return {
        metadata: suggestResponse.suggest_metadata,
        entities: suggestResponse.queries,
        valid: true
      };
    }

    case suggestAction.ActionTypes.SUGGEST_COMPLETE_FAIL: {
      return Object.assign({}, state, {
        valid: false
      });
    }

    default: {
      return state;
    }
  }
}

Unit tests for reducers

  • Import all the actions, reducers and mocks

import * as fromSuggestionResponse from './suggest-response';
import * as suggestAction from '../actions/suggest';
import { SuggestResponse } from '../models/api-suggest';
import { MockSuggestResponse } from '../shared/mocks/suggestResponse.mock';

 

  • Next, we test that an undefined action does not cause a change in the state and that the initial state properties are returned. We create an action with const action = {} as any; and call the reducer with const result = fromSuggestionResponse.reducer(undefined, action);. We then assert with an expect() block that the result is equal to initialState, i.e. all the initial state properties are returned.

describe('SuggestReducer', () => {
  describe('undefined action', () => {
    it('should return the default state', () => {
      const action = {} as any;
      const result = fromSuggestionResponse.reducer(undefined, action);
      expect(result).toEqual(fromSuggestionResponse.initialState);
    });
  });

 

  • Now we test the SUGGEST_COMPLETE_SUCCESS and SUGGEST_COMPLETE_FAIL actions and check that the reducer changes only the state properties belonging to each action, and does so correctly. Here we create the action as assigned to the const action variable in the code below. The next step is to create a new state object with the expected state properties, assigned to the variable expectedResult below. We then call the reducer and assert that the state properties of the result it returns are equal to the state properties of expectedResult (the mock state created for the test).

describe('SUGGEST_COMPLETE_SUCCESS', () => {
  it('should add suggest response to the state', () => {
    const ResponseAction = new suggestAction.SuggestCompleteSuccessAction(MockSuggestResponse);
    const expectedResult: fromSuggestionResponse.State = {
      metadata: MockSuggestResponse.suggest_metadata,
      entities: MockSuggestResponse.queries,
      valid: true
    };

    const result = fromSuggestionResponse.reducer(fromSuggestionResponse.initialState, ResponseAction);
    expect(result).toEqual(expectedResult);
  });
});

describe('SUGGEST_COMPLETE_FAIL', () => {
  it('should set valid to false', () => {
    const action = new suggestAction.SuggestCompleteFailAction();
    const result = fromSuggestionResponse.reducer(fromSuggestionResponse.initialState, action);
    expect(result.valid).toBe(false);
  });
});
});


Crawl Job Feature For Susper To Index Websites

The Yacy backend provides search results for Susper using a web crawler (or spider) to crawl and index data from the internet. Crawling also requires some minimal input from the user.

As stated by Michael Christen (@Orbiter) “a web index is created by loading a lot of web pages first, then parsing the content and placing the result into a search index. The question is: how to get a large list of URLs? This is solved by a crawler: we start with a single web page, extract all links, then load these links and go on. The root of such a process is the ‘Crawl Start’.”

Yacy has a web crawler module that can be accessed here: http://yacy.searchlab.eu/CrawlStartExpert.html. As we would like to have a fully featured front end for Yacy, we also introduced a crawler in Susper. Using the crawler one can tell Yacy what to do and how to crawl a URL in order to index search results on the Yacy server. To support the indexing of web pages with the help of the Yacy server, we implemented a 'Crawl Job' feature in Susper.

1) Visit http://susper.com/crawlstartexpert and give information regarding the sites you want Susper to crawl. Currently, the crawler accepts a list of URLs or a file containing URLs. You can customise the crawling process by tweaking crawl parameters like crawling depth, maximum pages per domain, filters, excluding media etc.

2) Once crawl parameters are set, click on ‘Start New Crawl Job’ to start the crawling process.

3) A basic authentication pop-up will appear. After filling it in, the user receives a success alert and is redirected back to the home page.

The crawl job then starts on the Yacy server according to the crawling parameters.

Implementation of Crawler on Susper:

We have created a separate component and service in Susper for the crawler.

Source code can be found at:

When the user initiates the crawl job by pressing the start button, the component's startCrawlJob() function is called, which in turn calls the crawl-start service. We send crawlvalues to the service and subscribe to the returned observable to confirm whether the crawl job has started or not.

crawlstart.component.ts:-

startCrawlJob() {
 this.crawlstartservice.startCrawlJob(this.crawlvalues).subscribe(res => {
   alert('Started Crawl Job');
   this.router.navigate(['/']);
 }, (err) => {
   if (err === 'Unauthorized') {
     alert("Authentication Error");
   }
 });
};

 

When startCrawlJob() is called on the service, it creates a URLSearchParams object, adds a parameter for each key in the input, and sends the request to the Yacy server as a JSONP request.

crawlstart.service.ts

startCrawlJob(crawlvalues) {
  let params = new URLSearchParams();
  for (let key in crawlvalues) {
    if (crawlvalues.hasOwnProperty(key)) {
      params.set(key, crawlvalues[key]);
    }
  }
  params.set('callback', 'JSONP_CALLBACK');

  let options = new RequestOptions({ search: params });
  return this.jsonp
    .get('http://yacy.searchlab.eu/Crawler_p.json', options)
    .map(res => res.json()); // return the parsed JSON body to subscribers
}
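As a rough usage sketch, the crawlvalues object passed in from the component is simply a map of form fields to values. The key names below are hypothetical placeholders and would have to match the parameters that Yacy's Crawler_p.json endpoint actually expects.

// Hypothetical crawl parameters collected from the crawl-start form.
const crawlvalues = {
  crawlingURL: 'https://fossasia.org',  // site to crawl
  crawlingDepth: 3,                     // how many link levels to follow
  mustnotmatch: '.*\\.(png|jpg|gif)$'   // e.g. a filter to exclude media files
};

this.crawlstartservice.startCrawlJob(crawlvalues)
  .subscribe(res => console.log('Crawl job submitted', res));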


Multiple Page Rendering on a Single Query in Susper Angular Front-end

Problem: Susper used to render a new results page for each new character input. It should render a single page for the final query, as reported in issue 371. For instance, the browser's back button showed five pages for a five-character query, one per character.

Solution: This problem arose from the following code:

this.router.navigate(['/search'], {queryParams: this.searchdata});

Previously, this one line in the search-bar component was executed on every character entered.

Fix: To fix this issue, router.navigate must be called only when we receive results and not on each character input.

So, we first removed the line causing the issue from the search-bar component and replaced it with:

this.store.dispatch(new queryactions.QueryServerAction(query));

 

This triggers a QueryServer action, which makes a request to the Yacy endpoint for search results.

Now, in app.component.ts, we subscribe to resultscomponentchange$, which emits only when new search results are received, and we navigate to a new page only when that subscription fires.

this.resultscomponentchange$ = store.select(fromRoot.getItems);
this.resultscomponentchange$.subscribe(res => {
 if (this.searchdata.query.length > 0) {
   this.router.navigate(['/search'], {queryParams: this.searchdata});
 }

});
this.wholequery$ = store.select(fromRoot.getwholequery);
this.wholequery$.subscribe(data => {
 this.searchdata = data;
});
if (localStorage.getItem('resultscount')) {
 this.store.dispatch(new queryactions.QueryServerAction({'query': '', start: 0, rows: 10, search: false}));
}

 

 

With this, the problem is fixed and only one page is rendered for a valid search. The source code for this implementation is available in this pull request.


Using RouterLink in the Susper Angular Frontend to Speed up the Loading Time

In Susper, whenever the user clicked certain links, the whole application used to load again, which increased the page load time. But in Single Page Applications (SPAs) we don't need to reload the whole application; in fact, SPAs are known to load internal pages faster than traditional HTML web pages. To achieve this we have to inform the application that a link points to an internal page, so that the application doesn't reload completely and reinitialize itself. In Angular, this is done by replacing href with routerLink on the anchor tag.

routerLink, when used on an anchor tag as follows,

<a routerLink="/contact" routerLinkActive="active">Contact</a>

doesn't load the whole page; instead only the contact component is fetched and rendered in place of <router-outlet></router-outlet>.

This happens through an asynchronous request for only the contact component, which reduces the load time and avoids a complete reload of the page.
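For routerLink to resolve, the target path has to be registered with the Angular router. A minimal route configuration for the contact example could look like the sketch below; it assumes a ContactComponent exists at the given path.

import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';
import { ContactComponent } from './contact/contact.component'; // assumed location

// '/contact' renders ContactComponent inside <router-outlet> without a full page reload.
const routes: Routes = [
  { path: 'contact', component: ContactComponent }
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule { }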

The time graph below shows the requests made when a tag with href was clicked.

If you observe it takes more than 3 seconds to load the page.

But when you use [routerLink] as an attribute for navigation, you find the page being displayed in just a blink.

What we have done in Susper?

In Susper, on issue #167, @mariobehling noticed that some links were loading slowly. On looking into the issue and reproducing it, I found that the problem was a full page reload: the anchor tags used an "href" attribute instead of the Angular "[routerLink]" attribute. I made a pull request changing href to "[routerLink]", making these navigations in Susper around 3x faster than before.

https://github.com/fossasia/susper.com/pull/234/files


Continuous Deployment Implementation in Loklak Search

At the current pace of web technology, quick response times and low downtime are core goals of any project. For a continuous deployment scheme, the most important factor is how efficiently contributors and maintainers are able to test and deploy the code with every PR. We faced this question when we started building loklak search.

As loklak search is a data-driven client-side web app, GitHub Pages is the simplest way to host it. At FOSSASIA, apps are developed by many developers working together on different features, which makes it even more important to have a unified flow of control and a simple integration with GitHub Pages as the continuous deployment pipeline.

So the broad concept of continuous deployment boils down to three basic requirements

  1. Automatic unit testing.
  2. The automatic build of the applications on the successful merge of PR and deployment on the gh-pages branch.
  3. Easy provision of demo links for the developers to test and share the features they are working on before the PR is actually merged.

Automatic Unit Testing

At loklak search we use Karma unit tests, with @angular/cli providing most of the tooling to run them. The central piece for automation is Travis CI, which we use as the CI solution. All of this is pretty easy to set up and use.

Travis CI has a particular advantage: the ability to run custom shell scripts at different stages of the build process, and we use this capability for our continuous deployment.

Automatic Builds of PR’s and Deploy on Merge

This is the main requirement of our CD scheme, and we achieve it with a shell script, deploy.sh, in the project repository root.

There are a few critical sections of the deploy script. The script starts with initialisation instructions which set up the appropriate variables and decrypt the SSH key which Travis uses for pushing to the gh-pages branch (we will set up this key later).

  • Here we also check that the deploy script runs only when the build is for the master branch, by exiting early from the script if it is not.
#!/bin/bash

SOURCE_BRANCH="master"
TARGET_BRANCH="gh-pages"

# Pull requests and commits to other branches shouldn't try to deploy.
if [ "$TRAVIS_PULL_REQUEST" != "false" -o "$TRAVIS_BRANCH" != "$SOURCE_BRANCH" ]; then
echo "Skipping deploy; The request or commit is not on master"
exit 0
fi

 

  • We also store important information regarding the deploy keys, which are generated manually and encrypted using Travis.
# Save some useful information
REPO=`git config remote.origin.url`
SSH_REPO=${REPO/https:\/\/github.com\//git@github.com:}
SHA=`git rev-parse --verify HEAD`

# Decryption of the deploy_key.enc
ENCRYPTED_KEY_VAR="encrypted_${ENCRYPTION_LABEL}_key"
ENCRYPTED_IV_VAR="encrypted_${ENCRYPTION_LABEL}_iv"
ENCRYPTED_KEY=${!ENCRYPTED_KEY_VAR}
ENCRYPTED_IV=${!ENCRYPTED_IV_VAR}
openssl aes-256-cbc -K $ENCRYPTED_KEY -iv $ENCRYPTED_IV -in deploy_key.enc -out deploy_key -d

chmod 600 deploy_key
eval `ssh-agent -s`
ssh-add deploy_key

 

  • We clone our repo from GitHub and then check out the target branch, which is gh-pages in our case.
# Cloning the repository to repo/ directory,
# Creating gh-pages branch if it doesn't exists else moving to that branch
git clone $REPO repo
cd repo
git checkout $TARGET_BRANCH || git checkout --orphan $TARGET_BRANCH
cd ..

# Setting up the username and email.
git config user.name "Travis CI"
git config user.email "$COMMIT_AUTHOR_EMAIL"

 

  • Now we clean up the directory so that a fresh build is deployed every time, while protecting the static files that are not generated by the build process: CNAME and 404.html.
# Cleaning up the old repo's gh-pages branch except CNAME file and 404.html
find repo/* ! -name "CNAME" ! -name "404.html" -maxdepth 1  -exec rm -rf {} \; 2> /dev/null
cd repo

git add --all
git commit -m "Travis CI Clean Deploy : ${SHA}"

 

  • After checking out our master branch we run npm install to install all dependencies and then build the project. We then move the files generated by ng build to the gh-pages branch and commit them to this branch.
git checkout $SOURCE_BRANCH
# Actual building and setup of current push or PR.
npm install
ng build --prod --aot
git checkout $TARGET_BRANCH
mv dist/* .
# Staging the new build for commit; and then committing the latest build
git add .
git commit --amend --no-edit --allow-empty

 

  • The final step is to push the build files to the gh-pages branch, and since we only want to do this if the build has actually changed, we add a check for that.
# Deploying only if the build has changed
if [ -z `git diff --name-only HEAD HEAD~1` ]; then

echo "No Changes in the Build; exiting"
exit 0

else
# There are changes in the Build; push the changes to gh-pages
echo "There are changes in the Build; pushing the changes to gh-pages"

# Actual push to gh-pages branch via Travis
git push --force $SSH_REPO $TARGET_BRANCH
fi

 

These roughly 70 lines of code handle all the heavy lifting and automate a large part of our CD. They make sure that no incorrect builds enter the gh-pages branch, enabling a smoother experience for both developers and maintainers.

The important aspect of this script is making sure Travis is able to push to gh-pages. This requires proper setup of the keys, which is definitely the trickiest part of the whole setup.

  • The first step is to generate the SSH key. This is done easily using terminal and ssh-keygen.
$ ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

 

  • I would recommend not using a passphrase, as Travis would then need it, which makes the setup tricky.
  • Now, this generates the RSA public/private key pair.
  • We now add this public deploy key to the settings of the repository.
  • After setting up the public key on GitHub we give the private key to Travis so that Travis is able to push on GitHub.
  • For this we use the Travis client, which helps to encrypt the key properly and stores the key and iv as secure environment variables on Travis. Travis can then use these values to decrypt the private key.
$ travis encrypt-file deploy_key
encrypting deploy_key for domenic/travis-encrypt-file-example
storing result as deploy_key.enc
storing secure env variables for decryption

Please add the following to your build script (before_install stage in your .travis.yml, for instance):

    openssl aes-256-cbc -K $encrypted_0a6446eb3ae3_key -iv $encrypted_0a6446eb3ae3_iv -in super_secret.txt.enc -out super_secret.txt -d

Pro Tip: You can add it automatically by running with --add.

Make sure to add deploy_key.enc to the git repository.
Make sure not to add deploy_key to the git repository.
Commit all changes to your .travis.yml.

 

  • Make sure to add deploy_key.enc to git repository and not to add deploy_key to git.

After all these steps, everything is done: our client-side web application will deploy on every push to the master branch.

These steps are required only once in the project life cycle. At loklak search, we haven't touched deploy.sh since it was written; it is a simple script, but it does all the continuous deployment work we want.

Generation of Demo Links and Test Deployments

It is also an essential part of continuous agile development that developers are able to share what they have built and that maintainers can review those features and fixes. This becomes difficult in a web application, as fixes and features are more often than not visual, and attaching screenshots to every PR becomes a hassle. If developers can deploy their changes to their own gh-pages branch and share demo links in the PR, it is a big win for faster development.

This step is specific to Angular projects, though there are similar approaches for React and other frameworks as well; failing that, we can build the page manually and push the output to the gh-pages branch of our fork.

We use @angular/cli to build the project and then the angular-cli-ghpages npm package to actually push to the gh-pages branch of the fork. These commands are combined and exposed as the npm command npm run deploy. This makes our CD scheme complete.

Conclusion

Clearly, the continuous deployment scheme has a lot of advantages over other methods, especially in client-side web apps with many PRs. It essentially eliminates the deployment hassles: no deployment requires manual intervention. Developers can simply concentrate on coding the application, and maintainers can review PRs by looking at the demo links and merge when the PR is in good shape; the deployment is then done entirely by the shell script without any commands from a developer or a maintainer.

Links

Loklak Search GitHub Repository: https://github.com/fossasia/loklak_search

Loklak Search Application: http://loklak.net/

Loklak Search TravisCI: https://travis-ci.org/fossasia/loklak_search/

Deploy Script: https://github.com/fossasia/loklak_search/blob/master/deploy.sh


Fixing the scroll position in the Susper Frontend

An interesting problem I encountered in the Susper frontend repository is the scroll position problem in SPAs (Single Page Applications). Since most websites now use single page applications, such a fix might prove useful to a lot of readers.
Single page applications (SPAs) provide a better user experience, but they are significantly harder to design and build. One problem they cause is that they do not handle the scroll position the way traditional pages do. With traditional pages, opening a new page by clicking a link opens it at the top,
and clicking back returns not just to the previous page but also to the last position scrolled to on it. The issue we faced in Susper was that when we opened a link, Susper, being an SPA, did not realise it was on a new page and hence did not scroll back to the top. This was observed on every page of the application.
Clicking on Terms on the footer for instance,

would open the bottom of the Terms page, which was not what we wanted.

FIX: Since all the pages required the fix, I added the logic to the main app component. The router instance emits an event for every navigation; once an event is identified as the end of a navigation action, the window is scrolled to (0,0).
Here is the code snippet:

import { Component, OnInit } from '@angular/core';
import { RouterModule, Router, NavigationEnd } from '@angular/router';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {
  title = 'Susper';

  constructor(private router: Router) { }

  ngOnInit() {
    this.router.events.subscribe((evt) => {
      if (!(evt instanceof NavigationEnd)) {
        return;
      }
      window.scrollTo(0, 0);
    });
  }
}

"NavigationEnd" is triggered at the end of a navigation action in Angular 2. So if "NavigationEnd" hasn't been triggered, our function need not do anything and can simply return. If a navigation action has just finished, the window is scrolled to the (0,0) coordinates.
Now, this is how the Terms page opens:

 

Done! Now every time a link is clicked it scrolls to the top.