Testing Presenter of MVP in Loklak Wok Android

Imagine joining a large codebase as a new developer: you are not sure whether the existing code works properly, and you are surrounded by questions like "Are all these methods invoked correctly, and the number of times they need to be invoked?" Being new to a codebase and manually checking already written code is a pain. For cases like these, unit tests are written. Unit tests check whether the implemented code works as expected. This blog post explains the implementation of unit tests for the Presenter in the Model-View-Presenter (MVP) architecture of Loklak Wok Android.

Adding Dependencies to project

In app/build.gradle file

defaultConfig {
   ...
   testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}

dependencies {
   ...
   androidTestCompile 'org.mockito:mockito-android:2.8.47'
   androidTestCompile 'com.android.support:support-annotations:25.3.1'
   androidTestCompile 'com.android.support.test.espresso:espresso-core:2.2.2'
}

Setup for Unit-Tests

The presenter needs a Realm database and an implementation of the LoklakAPI interface. Along with that, a mock of the View is required, so as to check whether the methods of the View are called or not.

The LoklakAPI interface can be mocked easily using Mockito, but the Realm database can't be mocked. For this reason an in-memory Realm database is created, which is destroyed once all unit tests are executed. As the presenter is required for each unit-test method, we instantiate the in-memory database before all the tests start, i.e. in a public static method annotated with @BeforeClass, e.g. the setDb method.

@BeforeClass
public static void setDb() {
   Realm.init(InstrumentationRegistry.getContext());
   RealmConfiguration testConfig = new RealmConfiguration.Builder()
           .inMemory()
           .name("test-db")
           .build();
   mDb = Realm.getInstance(testConfig);
}

 

NOTE: The in-memory database should be closed once all unit tests are executed. So, for closing the database we create a public static method annotated with @AfterClass, e.g. the closeDb method.

@AfterClass
public static void closeDb() {
   mDb.close();
}

 

Now, before each unit test is executed we need to do some setup work, like instantiating the presenter, creating a mock instance of the API interface using the mock static method, and pushing some sample data into the database. Our presenter uses RxJava and RxAndroid, which depend on the IO and MainThread schedulers to perform tasks asynchronously, and these schedulers are not present in the testing environment. So, we override RxJava and RxAndroid to use the trampoline scheduler in place of IO and MainThread so that our tests don't encounter a NullPointerException. All this is done in a public method annotated with @Before, e.g. setUp.

@Before
public void setUp() throws Exception {
   // mocking view and api
   mMockView = mock(SuggestContract.View.class);
   mApi = mock(LoklakAPI.class);

   mPresenter = new SuggestPresenter(mApi, mDb);
   mPresenter.attachView(mMockView);

   queries = getFakeQueries();
   // overriding rxjava and rxandroid
   RxJavaPlugins.setIoSchedulerHandler(scheduler -> Schedulers.trampoline());
   RxAndroidPlugins.setMainThreadSchedulerHandler(scheduler -> Schedulers.trampoline());

   mDb.beginTransaction();
   mDb.copyToRealm(queries);
   mDb.commitTransaction();
}

 

Some fake suggestion queries are created, which will be returned as an Observable when the API interface is mocked. For this, two Query objects are created and added to a List after their query parameter is set. This is implemented in the getFakeQueries method.

private List<Query> getFakeQueries() {
   List<Query> queryList = new ArrayList<>();

   Query linux = new Query();
   linux.setQuery("linux");
   queryList.add(linux);

   Query india = new Query();
   india.setQuery("india");
   queryList.add(india);

   return queryList;
}

 

After that, a method is created which provides the fake data wrapped inside an Observable, as implemented in the getFakeSuggestions method.

private Observable<SuggestData> getFakeSuggestions() {
   SuggestData suggestData = new SuggestData();
   suggestData.setQueries(queries);
   return Observable.just(suggestData);
}

 

Lastly, the mocking part is implemented using Mockito. This is really simple: the when and thenReturn static methods of Mockito are used. The method which would provide the fake data is invoked inside when, and the fake data is passed as a parameter to thenReturn. For example, the stubSuggestionsFromApi method:

private void stubSuggestionsFromApi(Observable observable) {
   when(mApi.getSuggestions(anyString())).thenReturn(observable);
}

Finally, Unit-Tests

All the test methods must be annotated with @Test.

Firstly, we test for a successful API request, i.e. we get some suggestions from the loklak server. For this, the getSuggestions method of LoklakAPI is mocked using the stubSuggestionsFromApi method, and the Observable to be returned is obtained using the getFakeSuggestions method. Then loadSuggestionsFromAPI, the method we need to test, is called. Once loadSuggestionsFromAPI is invoked, we check whether the methods of the View are invoked inside it; this is done using the verify static method. The unit test is implemented in the testLoadSuggestionsFromApi method.

@Test
public void testLoadSuggestionsFromApi() {
   stubSuggestionsFromApi(getFakeSuggestions());

   mPresenter.loadSuggestionsFromAPI("", true);

   verify(mMockView).showProgressBar(true);
   verify(mMockView).onSuggestionFetchSuccessful(queries);
   verify(mMockView).showProgressBar(false);
}

 

Similarly, a failed network request for obtaining suggestions is tested in the testLoadSuggestionsFromApiFail method. Here, we pass an IOException throwable, wrapped inside an Observable, as the parameter to stubSuggestionsFromApi.

@Test
public void testLoadSuggestionsFromApiFail() {
   Throwable throwable = new IOException();
   stubSuggestionsFromApi(Observable.error(throwable));

   mPresenter.loadSuggestionsFromAPI("", true);
   verify(mMockView).showProgressBar(true);
   verify(mMockView).showProgressBar(false);
   verify(mMockView).onSuggestionFetchError(throwable);
}

 

Lastly, we test that our suggestions are saved in the database by counting the number of saved suggestions and asserting on that count, in the testSaveSuggestions method.

@Test
public void testSaveSuggestions() {
   mPresenter.saveSuggestions(queries);
   int count = mDb.where(Query.class).findAll().size();
  // queries is the List that contains the fake suggestions
   assertEquals(queries.size(), count);
}

Resources:


Service Workers in Loklak Search

Loklak search is a web application built on the latest web technologies and is aiming to be a Progressive Web Application (PWA). A PWA is a web application which provides a rich, reliable, fast, and engaging web experience, and the web API which enables these features is Service Workers. This blog post describes the basics of service workers and their usage in the loklak search application as a network proxy and a programmatic cache controller for static resources.

What are Service Workers?

In the formal definition, Matt Gaunt describes a service worker as a script that the browser runs in the background, which helps us enable many modern web features. Most of these features involve intercepting network requests, caching, and responding from the cache in a programmatic way, independent of the native browser-based caching. Registering a service worker in an application is a simple task; there is just one thing to keep in mind: service workers need an HTTPS connection to work, as the web standard is built around the secure protocol. To register a service worker:

if ('serviceWorker' in navigator) {
  window.addEventListener('load', function() {
    navigator.serviceWorker.register('/sw.js').then(function(registration) {
      // Registration was successful
      console.log('ServiceWorker registration successful with scope: ', registration.scope);
    }, function(err) {
      // registration failed :(
      console.log('ServiceWorker registration failed: ', err);
    });
  });
}

This piece of JavaScript, if the browser supports service workers, registers the service worker defined by sw.js. The service worker then goes through its lifecycle: it gets installed and then takes control of the page it is registered with.

What do service workers solve in Loklak Search?

In loklak search, the service worker currently works as a network proxy that acts as a caching mechanism for static resources. These static resources include all the bundled JS files and images. The bundled chunks are cached in the service worker cache and are served from the cache when requested. The chunking of assets is an advantage for this caching strategy, as cache misses only happen for the chunks which were modified, while the unmodified parts of the application are served from the cache, so fewer assets need to be downloaded.
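To make the idea concrete, here is a minimal hand-written sketch of such a cache-first strategy. This is not the worker that loklak search actually ships (that logic lives in the generated worker-basic.js described below); the cache name and asset list are illustrative assumptions.

// sw.js - a hand-rolled sketch of a cache-first strategy for static assets
var CACHE_NAME = 'static-assets-v1';
var STATIC_ASSETS = ['/index.html', '/main.bundle.js', '/styles.bundle.js'];

self.addEventListener('install', function (event) {
  // Pre-cache the static assets when the worker is installed
  event.waitUntil(
    caches.open(CACHE_NAME).then(function (cache) {
      return cache.addAll(STATIC_ASSETS);
    })
  );
});

self.addEventListener('fetch', function (event) {
  // Respond from the cache when possible, fall back to the network otherwise
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});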

Service workers and Angular

As loklak search is an Angular application, we have used the @angular/service-worker library to implement the service worker. This library is simple to integrate and works with the CLI. There are two steps to enable it: the first is to download the service worker package

npm install --save @angular/service-worker

And the second step is to enable the service worker flag in .angular-cli.json

"apps": [
   {
      // Other Configurations
      serviceWorker: true
   }
]

Now when we generate the production build from the CLI, along with all the application chunks we get three files related to the service worker as well:

  • sw-register.bundle.js : This is a simple register script which is included in the index page to register the service worker.
  • worker-basic.js : This is the main service worker logic, which handles all the caching strategies.
  • ngsw-manifest.json : This is a simple manifest which contains all the assets to be cached along with their version hashes for cache busting.

Future enhancements in Loklak Search with Service Workers

Service workers are new in loklak search and are currently just used for caching static resources. We will be using them for more sophisticated caching strategies like:

  • Dynamically caching the results and resources received from the API
  • Using IndexedDB interface with service workers for storing the API response in a structured manner.
  • Using service workers and the app manifest to provide an app-like experience to the user.

 

Resources and Links


Implementing Direct URL in loklak Media Wall

A direct URL is a web address which takes the user to a preset, customized media wall so that the media wall can be displayed on the screen right away. Loklak media wall provides a direct URL which includes, in the web address, the information related to the customizations set by the user. These customizations, passed as query parameters, are detected when the page is initialized, and actions are dispatched to change the state properties and, hence, the UI properties and the expected behaviour of the media wall.

In this blog, I explain how I implemented direct URL in loklak media wall and how customizations are detected on initialization of the component to build a customized media wall.

Flow Chart

Working

Media Wall Direct URL effect

This effect detects when the WALL_GENERATE_DIRECT_URL action is dispatched, creates a direct URL string from all the customization state properties, and dispatches a side action, WallShortenDirectUrlAction(), to store the direct URL string as a state property. For this, we need to get the individual wall customization state properties, create an object from them, and supply it as a parameter to the generateDirectUrl() function. The direct URL string is returned from the function, and the action is then dispatched to store this string as a state property.

@Effect()
generateDirectUrl$: Observable<Action>
   = this.actions$
      .ofType(mediaWallDirectUrlAction.ActionTypes.WALL_GENERATE_DIRECT_URL)
      .withLatestFrom(this.store$)
      .map(([action, state]) => {
         return {
            query: state.mediaWallQuery.query,
            .
            .
            .
            wallBackground: state.mediaWallCustom.wallBackground
         };
      })
      .map(queryObject => {
         const configSet = {
            queryString: queryObject.query.displayString,
            .
            .
            .
            wallBackgroundColor: queryObject.wallBackground.backgroundColor
         };
         const shortenedUrl = generateDirectUrl(configSet);
         return new mediaWallDirectUrlAction.WallShortenDirectUrlAction(shortenedUrl);
      });

Generate Direct URL function

This function generates the direct URL string from the current values of all the customization options. The keys of the object are separated out, and for each element of the object it checks whether there is a current value; if so, it first encodes the value into URI format and then adds it to the direct URL string. In this way, we create a direct URL string with the customizations provided as query parameters.

export function generateDirectUrl(customization: any): string {
   const shortenedUrl = '';
   const activeFilterArray: string[] = new Array<string>();
   let qs = '';
   Object.keys(customization).forEach(config => {
      if (customization[config] !== undefined && customization[config] !== null) {
         if (config !== 'blockedUser' && config !== 'hiddenFeedId') {
            qs += `${config}=${encodeURIComponent(customization[config])}&`;
         }
         else {
            if (customization[config].length > 0) {
               qs += `${config}=${encodeURIComponent(customization[config].join(','))}&`;
            }
         }
      }
   });
   qs += `ref=share`;
   return qs;
}

Creating a customized media wall

Whenever the user opens such a URL, a customized media wall must be created on initialization. The media wall component detects and subscribes to the URL query parameters using the queryParams API of ActivatedRoute. The values are parsed into the required payload format and the respective actions are dispatched according to the values of the parameters. When all the actions are dispatched, the state properties change accordingly. This creates a unidirectional flow of the state properties from the URL parameters to the template. The state properties supplied to the template are then rendered and a customized media wall is created.

private queryFromURL(): void {
   this.__subscriptions__.push(
      this.route.queryParams
         .subscribe((params: Params) => {
            const config = {
               queryString: params['queryString'] || '',
               imageFilter: params['imageFilter'] || '',
               profanityCheck: params['profanityCheck'] || '',
               removeDuplicate: params['removeDuplicate'] || '',
               wallHeaderBackgroundColor: params['wallHeaderBackgroundColor'] || '',
               wallCardBackgroundColor: params['wallCardBackgroundColor'] || '',
               wallBackgroundColor: params['wallBackgroundColor'] || ''
            };
            this.setConfig(config);
         })
   );
}

public setConfig(configSet: any) {
   if (configSet['displayHeader']) {
      const isTrueSet = (configSet['displayHeader'] === 'true');
      this.store.dispatch(new mediaWallDesignAction.WallDisplayHeaderAction(isTrueSet));
   }
   .
   .
   if (configSet['queryString'] || configSet['imageFilter'] || configSet['location']) {
      if (configSet['location'] === 'null') {
         configSet['location'] = null;
      }
      const isTrueSet = (configSet['imageFilter'] === 'true');
      const query = {
         displayString: configSet['queryString'],
         queryString: '',
         routerString: configSet['queryString'],
         filter: {
            video: false,
            image: isTrueSet
         },
         location: configSet['location'],
         timeBound: {
            since: null,
            until: null
         },
         from: false
      };
      this.store.dispatch(new mediaWallAction.WallQueryChangeAction(query));
   }
}

Now, the state properties are rendered accordingly and a customized media wall is created. This saves the user a lot of effort in changing the customization options whenever they use the loklak media wall.

Reference


MVP in Loklak Wok Android using Dagger2

MVP stands for Model-View-Presenter, one of the most popular and commonly used design patterns in Android apps. "Model" refers to the data source; it can be a SharedPreference, a database, or data from a network call. Going by the word, "View" is the user interface, and finally the "Presenter" is a mediator between the Model and the View. Events occurring in a View are passed to the Presenter, the Presenter fetches the data from the Model and passes it back to the View, where the data is populated in ViewGroups. Now, the main question: why is it so widely used? One obvious reason is its simplicity to implement, and it completely separates the business logic, so it is easy to write unit tests. Though it is easy to implement, the implementation requires a lot of boilerplate code, which is one of its drawbacks. But using Dagger2 the boilerplate code can be reduced to a great extent. Let's see how Dagger2 is used in Loklak Wok Android to implement the MVP architecture.

Adding Dagger2 to the project

In app/build.gradle file

dependencies {
   ...
   compile 'com.google.dagger:dagger:2.11'
    annotationProcessor 'com.google.dagger:dagger-compiler:2.11'
}

 

Implementation

First a contract is created which defines the behaviour, or say the functionality, of the View and the Presenter, like showing a progress bar when data is being fetched, or updating the view when the network request succeeds or fails. The contract should be easy to read, and going by the names of the methods one should be able to tell what they do. For tweet search suggestions, the contract is defined in the SuggestContract interface.

public interface SuggestContract {

   interface View {

       void showProgressBar(boolean show);

       void onSuggestionFetchSuccessful(List<Query> queries);

       void onSuggestionFetchError(Throwable throwable);
   }

   interface Presenter {

       void attachView(View view);

       void createCompositeDisposable();

       void loadSuggestionsFromAPI(String query, boolean showProgressBar);

       void loadSuggestionsFromDatabase();

       void saveSuggestions(List<Query> queries);

       void suggestionQueryChanged(Observable<CharSequence> observable);

       void detachView();
   }
}

 

A SuggestPresenter class is created which implements the SuggestContract.Presenter interface. I will not be explaining how each method in the SuggestPresenter class is implemented, as this blog solely deals with implementing MVP. If you are interested you can go through the source code of SuggestPresenter. Similarly, the view, i.e. SuggestFragment, implements the SuggestContract.View interface.

So, at this point we have our Presenter and View ready. The Presenter needs to access the Model, and the View requires an instance of the Presenter. One way could be instantiating the Model inside the Presenter and the Presenter inside the View. But this way Model, View and Presenter would be coupled, and that defeats our purpose. So, we just INJECT the Model into the Presenter and the Presenter into the View using Dagger2. Injecting here means that Dagger2 instantiates the Model and the Presenter and provides them wherever they are requested.

ApplicationModule provides the required dependencies for accessing the "Model", i.e. a Loklak API client and a Realm database instance. When we want Dagger2 to provide a dependency we create a method annotated with @Provides, as in providesLoklakAPI and providesRealm.

@Provides
LoklakAPI providesLoklakAPI(Retrofit retrofit) {
   return retrofit.create(LoklakAPI.class);
}

@Provides
Realm providesRealm() {
   return Realm.getDefaultInstance();
}

 

If we look closely, the providesLoklakAPI method requires a Retrofit instance, i.e. to create an instance of LoklakAPI the required dependency is Retrofit, which is fulfilled by the providesRetrofit method. Always remember that whenever a dependency is required, it should not be instantiated at the place where it is needed; rather it should be injected by Dagger2.

@Provides
Retrofit providesRetrofit() {
   Gson gson = Utility.getGsonForPrivateVariableClass();
   return new Retrofit.Builder()
           .baseUrl(mBaseUrl)
           .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
           .addConverterFactory(GsonConverterFactory.create(gson))
           .build();
}

 

As the ApplicationModule class provides these dependencies the class is annotated with @Module.

@Module
public class ApplicationModule {

   private String mBaseUrl;

   public ApplicationModule(String baseUrl) {
       this.mBaseUrl = baseUrl;
   }
   
   
   // retrofit, LoklakAPI, realm @Provides methods
}


After preparing the source to provide the dependencies, it’s time we request the dependencies.

Dependencies are requested simply by using the @Inject annotation, e.g. @Inject is used on the constructor of SuggestPresenter, due to which Dagger2 provides instances of LoklakAPI and Realm for constructing an object of SuggestPresenter.

public class SuggestPresenter implements SuggestContract.Presenter {

   private final Realm mRealm;
   private LoklakAPI mLoklakAPI;
   private SuggestContract.View mView;
   ...

   @Inject
   public SuggestPresenter(LoklakAPI loklakAPI, Realm realm) {
       this.mLoklakAPI = loklakAPI;
       this.mRealm = realm;
       ...
   }
   
   // implementation of methods defined in contract
}


@Inject can be used on fields also. When @Inject is used on a constructor, the class itself also becomes a dependency provider; this way, creating a method with @Provides in a Module class is not required.
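For instance, a rough sketch of field injection is shown below; the exact fields in the real SuggestFragment may differ, this is only to illustrate the annotation.

public class SuggestFragment extends Fragment implements SuggestContract.View {

    // Dagger2 fills this field once inject(this) is called on the component,
    // see the onCreateView callback later in this post
    @Inject
    SuggestPresenter suggestPresenter;

    // ... rest of the fragment
}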

Now, it's time to connect the dependency providers and the dependency requesters. This is done by creating a Component interface, here ApplicationComponent. The component interface defines where the dependencies are required. This is only needed for the cases where dependencies are injected using @Inject on member variables. So, we define a method inject with a single parameter of type SuggestFragment, as the Presenter needs to be injected into SuggestFragment.

@Component(modules = ApplicationModule.class)
public interface ApplicationComponent {


   void inject(SuggestFragment suggestFragment);

}

 

The component interface is instantiated in the onCreate method of the LoklakWokApplication class, so that it is accessible all over the project.

public class LoklakWokApplication extends Application {

   private ApplicationComponent mApplicationComponent;

   @Override
   public void onCreate() {
       super.onCreate();
      ...
       mApplicationComponent = DaggerApplicationComponent.builder()
               .applicationModule(new ApplicationModule(Constants.BASE_URL_LOKLAK))
               .build();
   }

   public ApplicationComponent getApplicationComponent() {
       return mApplicationComponent;
   }
   
   ...
}


NOTE: DaggerApplicationComponent is generated only after building the project. So, Android Studio will show "Cannot resolve symbol …" until you build the project: Build > Make Module 'app'.

Finally, in the onCreateView callback of SuggestFragment we call the inject method of the component to tell Dagger2 that SuggestFragment is requesting dependencies.

@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
                        Bundle savedInstanceState) {
...   
   LoklakWokApplication application = (LoklakWokApplication) getActivity().getApplication();
   application.getApplicationComponent().inject(this);
   suggestPresenter.attachView(this);

   return rootView;
}

Resources:


Animations in Loklak Wok Android

Imagine an Activity suddenly popping out of nowhere in front of the user. Even more irritating, the user doesn't even know whether a button was clicked. Though these are very small animation implementations, they enhance the user experience to a new level. This blog deals with the animations in Loklak Wok Android, a peer message harvester for Loklak Server.

Activity transition animation

An activity transition is applied when we move from the current activity to a new activity, or go back to an old activity by pressing the back button.

In Loklak Wok Android, when the user navigates for search suggestions from TweetHarvestingActivity to SuggestActivity, the new activity, i.e. SuggestActivity, comes in from the right side of the screen and the old one, i.e. TweetHarvestingActivity, leaves the screen through the left side. This is an example of a left-right activity transition. To implement this, two XML files which define the animations, enter.xml and exit.xml, are created.

<set
   xmlns:android="http://schemas.android.com/apk/res/android"
   android:shareInterpolator="false">

   <translate
       android:duration="500"
       android:fromXDelta="100%"
       android:toXDelta="0%"/>
</set>

 

NOTE: The entering activity comes from the right side, which is why the android:fromXDelta parameter is set to 100%; as the activity finally settles at the extreme left, the android:toXDelta parameter is set to 0%.

The current activity, in this case TweetHarvestingActivity, leaves the screen by moving from the left edge further to the left. So, in exit.xml the android:fromXDelta parameter is set to 0% and the android:toXDelta parameter to -100%.
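Putting that together, exit.xml looks something like the following sketch; the 500 ms duration mirrors enter.xml and is an assumption.

<set
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:shareInterpolator="false">

    <translate
        android:duration="500"
        android:fromXDelta="0%"
        android:toXDelta="-100%"/>
</set>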

Now that we are done with defining the animations in XML, it's time to apply them, which is really easy. The animations are applied by invoking Activity.overridePendingTransition(enterAnim, exitAnim) just after the startActivity call. For example, in openSuggestActivity:

private void openSuggestActivity() {
   Intent intent = new Intent(getActivity(), SuggestActivity.class);
   startActivity(intent);
   getActivity().overridePendingTransition(R.anim.enter, R.anim.exit);
}

 

Touch Selectors

Using touch selectors, the background color of a button or any clickable can be changed, so a user can see that the clickable responded to the click. The background is usually a light accent color or a lighter shade of the icon present in the button.

There are three states involved while a clickable is touched: pressed, activated and selected, plus a default state, i.e. when the clickable is not clicked. The background color for each state is defined in an XML file like media_button_selector, which is present in the drawable directory.

<selector xmlns:android="http://schemas.android.com/apk/res/android">

   <item android:drawable="@color/media_button_touch_selector_backgroud" android:state_pressed="true"/>
   <item android:drawable="@color/media_button_touch_selector_backgroud" android:state_activated="true"/>
   <item android:drawable="@color/media_button_touch_selector_backgroud" android:state_selected="true"/>

   <item android:drawable="@android:color/transparent"/>
</selector>

 

The selector is applied by setting it as the background of a clickable, for example, the touch selector applied to the Location image button present in fragment_tweet_posting.xml.

<ImageButton
   android:layout_width="40dp"
   android:layout_height="40dp"
   
   android:background="@drawable/media_button_selector" />

 

Notice the change in the background color of the buttons when clicked.

Resources:

Some youtube videos for getting started:


Analyzing Production Build Size in Loklak Search

Loklak search being a web application, it is critical to keep the size of the application in check to ensure that we are not transferring any non-essential bytes to the user, so that the application load is faster and we get a minimal first-paint time. This requires a mechanism to check the size of the build files which are generated and served to the user. Alongside the ability to check sizes, it is also critically important to analyze the distribution of the modules, along with their sizes, in the various chunks. In this blog post, I discuss the analysis of the application code of loklak search and the generated build files.

Importance of Analysis

Chunk size analysis is critical to any application, as chunk sizes directly determine its performance at any scale. The smaller the application, the lower the load time, and thus the faster it becomes usable on the user's side. The time to first paint is the most important metric to keep in mind while analyzing any web application for performance; although the first-paint time consists of many critical parts, from loading and parsing to layout and paint, the size of a chunk still determines the time it takes to render it on the screen.

Also, as we use 3rd-party libraries and components, it becomes crucially important to inspect the impact of their inclusion on the size of the application.

Development Phase Checking

Angular CLI provides a clean mechanism to track and check the size of all the chunks during development: it simply prints the size of each chunk in the application in the terminal on every successful compilation, and this gives us a broad idea about which chunks to look at and address.

Deep Analysis using Webpack Bundle Analyzer

The Angular CLI, while generating the production build, provides us with an option to generate statistics about the chunks, including the size and namespace of each module which is part of a chunk. These stats are generated directly by webpack at the time of bundling, code splitting, and tree shaking. These statistics thus let us peek into the deeper level of chunk creation in webpack and analyze the sizes of its various components. To generate the statistics we just need to enable the --stats-json flag while building.

ng build --prod --aot --stats-json

This will generate the statistics file for the application in the /dist directory, alongside all the bundles. Now, to have a visual and graphical analysis of these statistics, we can use a tool like webpack-bundle-analyzer. We can install webpack-bundle-analyzer via npm:

npm install --save-dev webpack-bundle-analyzer

Now, we can add a script to our package.json; running this script will open up a web page which contains a graphical visualization of all the chunks built in the application.

// package.json

{
   ...
   ...
   "scripts": {
      ...
      ...
      "analyze": "webpack-bundle-analyzer dist/stats.json"
   }
}

These block diagrams also contain information about the submodules contained in each chunk, and thus we can easily analyze and compare the size of each component we add to the application.

Now, we can see in the above distribution that main.bundle is the largest among all the chunks, and the major part of it is occupied by moment.js. This analysis provides us with deeper insight into the impact of a module like moment.js on the application size. It helps us reason about which parts of the application can be replaced with lighter alternatives and which parts are worth the size they are consuming; a 3rd-party module which consumes a lot of size but is used in some insignificant feature should be replaced with a lightweight alternative.

Conclusion

Thus, being able to see the description of the modules in each and every chunk gives us a way to reason about and compare alternative approaches to a particular problem in terms of their effect on the size of the application, so we are able to make the best decision.

Resources and Links

  • Analyzing the builds blog by hackernoon
  • Bundle analysis for webpack applications blog by Nimesh

Using CSS Grid in Loklak Search

CSS Grid is the latest web standard for layouts in web applications. It is the web standard which allows an HTML page to be treated as 2-dimensional for laying out the elements on the page, and it is used in parts of loklak search for layout. In this blog post, I will discuss the basic terminology of CSS Grid and its usage in loklak search for layout structuring and responsiveness.

CSS Grid Basics

There are some basic terms regarding the grid; the major ones are the following.

Grid Container

The grid container is the wrapper of all the grid items. It is declared with display: grid, which makes all the direct children of that element become grid items.

Grid Tracks

The rows and columns of the grid are defined by lines; the area between any two adjacent lines is called a grid track. Tracks can be defined using any length unit. Grid also introduces an additional length unit to help us create flexible grid tracks: the new fr unit represents a fraction of the available space in the grid container.
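For example, a track definition like the following (the selector and values are arbitrary, just for illustration) creates one fixed 200px column and two flexible columns that share the remaining space equally:

.container {
   display: grid;
   /* one fixed track and two flexible tracks of one fraction each */
   grid-template-columns: 200px 1fr 1fr;
}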

Grid Cells

The area between any two horizontal and vertical lines is called a grid cell.

Grid Area

The area formed by the combination of two or more cells is called a grid area.

Using CSS grid in Loklak Search

Loklak search uses CSS Grid in the feeds page to align elements in a responsive way on mobile and desktop. Earlier there was an issue that on small displays the info box of the results appeared after the feed results, and we needed to make sure that it appears on top on smaller displays. This is the outline of the structure of the feed page:

<div class="feed-wrapper">
   <div class="feed-results">
      <!-- Feed Results -->
   </div>

   <div class="feed-info-box">
      <!-- Feed Info Box -->
   </div>
</div>

Now we use CSS Grid to position the items according to the display width. First we declare "feed-wrapper" as display: grid to make it a grid container, and we define the rows and columns accordingly.

.feed-wrapper {
   display: grid;
   grid-template-columns: 150px 632px 455px 1fr;
   grid-template-rows: auto;
}

This defines the grid as consisting of 4 columns of width 150px, 632px, 455px and one remaining fraction unit, i.e. 1fr. The rows are set to auto.

Now we define the grid areas, i.e. the names of the areas, using the grid-area: <area> CSS property. This gives names to the elements in the CSS grid.

.feed-results {
   grid-area: feed-results;
}

.feed-info-box {
   grid-area: feed-info-box;
}

The last thing which remains is to specify the position of these grid elements in the grid cells according to the display width; we use simple media queries along with the grid area positioning property, i.e. grid-template-areas.

.feed-wrapper {
   /* Other Properties */
   @media(min-width: 1200px) {
      grid-template-areas: ". feed-results feed-info-box .";
   }

   @media(max-width: 1199px) {
      grid-template-columns: 1fr;
      grid-template-areas:
         "feed-info-box"
         "feed-results";
   }
}

This positions both the boxes according to the display width: in a single row, side by side, on large displays, and with the info box on top of the results on mobile displays.

This is how it looks on the large desktop displays

 

This is how it looks on small mobile displays

Links and References

 

 


Adding additional information to store listing page of Loklak apps site

The Loklak apps site now has a completely functional store listing page where users can find all relevant information about the app they want to view. The page has a left sidebar which shows various categories to switch between, a right sidebar for suggesting similar apps to users, and a middle section which provides users with important information about the app, like getting started, usage, promo images, preview images, a test link and various other details. In this blog I will describe how the bottom section of the middle column has been created (related issue: #209).

The bottom section

The bottom section provides information like the last updated date, version, app source, developer information, contributors, technology stack and license. All this information has to be dynamically loaded for each selected app. As I had previously mentioned here, no HTML content can be hard-coded in the store listing page. So how do we show the above-mentioned information for the different apps? Well, for this we will once again use the app.json of the corresponding app, like we did for the middle section here.

At first, for a given app we need to define some extra fields in the app.json file as shown below.

"appSource": "https://github.com/fossasia/apps.loklak.org/tree/master/MultiLinePlotter",
  "contributors": [{"name": "djmgit", "url": "http://djmgit.github.io/"}],
  "techStack": ["HTML", "CSS", "AngularJs", "Morris.js", "Bootstrap", "Loklak API"],
  "license": {"name": "LGPL 2.1", "url": "https://www.gnu.org/licenses/old-licenses/lgpl-2.1"},
  "version": "1.0",
  "updated": "June 10,2017",

The above code snippet shows the new fields included in app.json. The fields are as described below.

  • appSource: Stores the link to the source code of the app.
  • contributors: Stores a list of objects; each object stores the name of a contributor and a URL corresponding to that contributor.
  • techStack: A list containing the names of the technologies used.
  • license: Name and link of the license.
  • version: The current version of the app.
  • updated: Date on which the app was last updated.

These fields provide the source for the information presented in the bottom section for the app.

Now we need to render this information on the store listing page. As an example, let us see how version is rendered.

<div ng-if="appData.version !== undefined && appData.version !== ''" class="col-md-4 add-info">
                  <div class="info-type">
                    <h5 class="info-header">
                      <strong>Version</strong>
                    </h5>
                  </div>
                  <div class="info-body">
                    {{appData.version}}
                  </div>
                </div>

We first check if the version field is defined and not empty. Then we print a header (Version in this case) followed by the value. This is how updated, appSource and license are also displayed. What about technology stack and contributors? The technology stack is basically a list and may contain quite a number of strings (technology names). If we display all the values at once, the bottom section will get crowded and it may degrade the UI of the page. To avoid this, a popup dialog is used: when the user clicks on the technology stack label, a popup dialog appears which shows the various technologies used in the app.

<div class="info-body">
                    <div class="dropdown">
                      <div class="dropdown-toggle" type="button" data-toggle="dropdown">
                        View technology stack
                      </div>
                      <ul class="dropdown-menu">
                        <li ng-repeat="item in appData.techStack" class="tech-item">
                           {{item}}
                        </li>
                      </ul>
                    </div>
                  </div>

After displaying a header, we iterate over the techStack list and populate our popup dialog. This popup dialog is attached to the label 'View technology stack'; whenever a user clicks on this label, the popup is shown. The same technique is also applied for rendering contributors: a popup dialog is used to display all the contributors. Thus the technology stack and contributors list are shown only on demand.

For developer information, the name of the developer is shown, linked to his/her website, and there is an option to send an email or copy the email id, if present.

<div class="info-body">
                    <span ng-if="appData.author.url !== undefined && appData.author.url !== ''">
                      <a href="{{appData.author.url}}"> {{appData.author.name}} </a>
                    </span>
                    <a ng-if="appData.author.email !== undefined && appData.author.email !== ''" class="mail"
                      href="mailto:{{appData.author.email}}">
                      <span class="glyphicon glyphicon-envelope"></span>
                    </a>
                  </div>



For the email id, Bootstrap's envelope glyphicon is used along with a mailto link pointing to the developer's email id. What does mailto do? It simply opens your default mail client. For example, if you are on Linux, it might open Thunderbird. If you do not have a mail client installed but your default browser is Google Chrome, it will open the Gmail mail composer. If you are viewing the site on an Android device, it will open the Gmail app directly.

The bottom section can be viewed here.

Important resources

 


Automatic Signing and Publishing of Android Apps from Travis

Having discussed preparing the apps in the Play Store for automatic deployment and Google App Signing in previous blogs, in this blog I'll talk about how to use Travis CI to automatically sign and publish the apps using fastlane, as well as how to upload sensitive information like signing keys and the publishing JSON to the open source repository. This method will be used to publish the following Android apps:

Current Project Structure

The example project I have used to set up the process has the following structure:

It's a normal Android project with a .travis.yml and some additional bash scripts in the scripts folder. The update-apk.sh file is the standard app build and repo push script found in FOSSASIA projects. The process used to develop it is documented in previous blogs. First, we'll see how to upload our keys to the repo after encrypting them.

Encrypting keys using Travis

Travis provides very nice documentation on encrypting files containing sensitive information, but a crucial piece of information is buried low on the page. As you'd normally want to upload two things to the repo, the app signing key and the API JSON file for the Google Play release manager API used by fastlane, you can't encrypt them separately using the standard Travis file encryption command, as it will override the previous encrypted file's secret. Instead, you need to create a tarball of all the files that need to be encrypted and encrypt that tar. Along with this, before you use the files, you'll have to decrypt the tar in the Travis build and also uncompress it.

So, first install the Travis CLI tool and log in using travis login (you should have the right access to the repo and Travis CI in order to encrypt the files for it).

Then add the signing key and fastlane json in the scripts folder. Let’s assume the names of the files are key.jks and fastlane.json

Then, go to scripts folder and run this command to create a tar of these files:

tar cvf secrets.tar fastlane.json key.jks

 

secrets.tar will be created in the folder. Now, run this command to encrypt the file

travis encrypt-file secrets.tar

 

A new file secrets.tar.enc will be created in the folder. Now delete the original files and the secrets tar so they do not get added to the repo by mistake. The output log will show the command for decrypting the file, to be added to the .travis.yml file.

Decrypting keys using Travis

But if we add it there, the keys will be decrypted for each commit on each branch. We want this to happen only for the master branch, as we only publish from that branch. So, we'll create a bash script prep-key.sh for the task with the following content:

#!/bin/sh
set -e

export DEPLOY_BRANCH=${DEPLOY_BRANCH:-master}

if [ "$TRAVIS_PULL_REQUEST" != "false" -o "$TRAVIS_REPO_SLUG" != "iamareebjamal/android-test-fastlane" -o "$TRAVIS_BRANCH" != "$DEPLOY_BRANCH" ]; then
    echo "We decrypt key only for pushes to the master branch and not PRs. So, skip."
    exit 0
fi

openssl aes-256-cbc -K $encrypted_4dd7_key -iv $encrypted_4dd7_iv -in ./scripts/secrets.tar.enc -out ./scripts/secrets.tar -d
tar xvf ./scripts/secrets.tar -C scripts/

 

Of course, you'll have to change the commands and arguments according to your needs and repo, especially the key IDs in the decryption command.

The script checks that the repo and branch are correct and that the commit is not from a PR, then decrypts the file and extracts the contents into the appropriate directory.

Before signing the app, you'll need to store the keystore password, alias and key password in Travis environment variables. Once you have done that, you can proceed to signing the app. I'll assume the variable names to be $STORE_PASS, $ALIAS and $KEY_PASS respectively.
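One way to add them is with the Travis CLI; the values below are placeholders, and the same variables can also be added from the repository settings page on Travis CI.

# Values shown are placeholders, not real credentials
travis env set STORE_PASS "your-keystore-password"
travis env set ALIAS "your-key-alias"
travis env set KEY_PASS "your-key-password"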

Signing App

Now, come to the part in the upload-apk.sh script where you have the unsigned release app built. Let's assume its name is app-release-unsigned.apk. Then run this command to sign it:

cp app-release-unsigned.apk app-release-unaligned.apk
jarsigner -verbose -tsa http://timestamp.comodoca.com/rfc3161 -sigalg SHA1withRSA -digestalg SHA1 -keystore ../scripts/key.jks -storepass $STORE_PASS -keypass $KEY_PASS app-release-unaligned.apk $ALIAS

 

Then run this command to zipalign the app

${ANDROID_HOME}/build-tools/25.0.2/zipalign -v -p 4 app-release-unaligned.apk app-release.apk

 

Remember that the build tools version should be the same as the one specified in .travis.yml
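For reference, here is a sketch of the relevant android block of .travis.yml; the exact components listed are an assumption, the point is that the build-tools entry matches the version used in the zipalign path above.

android:
  components:
    - tools
    - platform-tools
    - build-tools-25.0.2
    - android-25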

Running the zipalign command above will create an apk named app-release.apk

Publishing App

This is the easiest step. First install fastlane using this command

gem install fastlane

 

Then run this command to publish the app to alpha channel on Play Store

fastlane supply --apk app-release.apk --track alpha --json_key ../scripts/fastlane.json --package_name com.iamareebjamal.fastlane

 

You can always configure the arguments according to your need. Also notice that you have to provide the package name for Fastlane to know which app to update. This can also be stored as an environment variable.

That is all for this blog; you can read more about the Travis CLI, fastlane features and the signing process in the links below:


Setting Loklak Server with SSL

Loklak Server is based on an embedded Jetty server which can work both with and without SSL encryption. Lately, there was a need to set up Loklak Server with SSL. Though the need was satisfied by CloudFlare, there are alternatively two ways to set up Loklak Server with SSL. They are:

1) Default Jetty Implementation

There is a pre-existing implementation based on the Jetty libraries. The HTTP mode can be set in the configuration file. There are 4 modes in which Loklak Server can work: http mode, https mode, only-https mode and redirect-to-https mode. Loklak Server listens on port 9000 when in http mode and on port 9443 when in https mode.

There is also a need for an SSL certificate, which is to be added in the configuration file.
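A rough sketch of what the relevant entries in the configuration file look like is shown below; the key names are illustrative assumptions rather than the exact keys used by Loklak Server.

# ports used by the embedded Jetty server (illustrative key names)
port.http=9000
port.https=9443

# one of the four modes described above: off / on / only / redirect
https.mode=redirect

# keystore holding the SSL certificate (placeholder values)
keystore.name=keystore.jks
keystore.password=changeit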

2) Getting SSL certificate with Kube-Lego on Kubernetes Deployment

I got to know about Kube-Lego by @niranjan94. It is implemented in Open-Event-Orga-Server. The approach is to use:

a) Nginx as ingress controller

For setting up Nginx ingress controller, a yml file is needed which downloads and configures the server.

The configurations for data requests and response are:

proxy-connect-timeout: "15"
proxy-read-timeout: "600"
proxy-send-timeout: "600"
hsts-include-subdomains: "false"
body-size: "64m"
server-name-hash-bucket-size: "256"
server-tokens: "false"

Nginx is configured to work on both http and https ports in service.yml

ports:
- port: 80
  name: http
- port: 443
  name: https

 

b) Kube-Lego for fetching SSL certificates from Let’s Encrypt

Kube-Lego was set up with default values in its yml. It uses the host name, email address and secret name of the deployment to validate the URL and fetch an SSL certificate from Let's Encrypt.
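The default setup boils down to a small container spec; the snippet below is a sketch with placeholder values, where LEGO_EMAIL and LEGO_URL are kube-lego's standard environment variables.

# Sketch of the kube-lego container environment (placeholder values)
env:
- name: LEGO_EMAIL
  value: "someone@example.org"   # email used for the Let's Encrypt account
- name: LEGO_URL
  value: "https://acme-v01.api.letsencrypt.org/directory"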

c) Setup configurations related to TLS and no-TLS connection

These configuration files mention the path and service ports for the Nginx server through which requests are forwarded to the backend Loklak Server. Here, for both no-TLS and TLS requests, the requests are directly forwarded to port 80 of the Loklak Server container.

rules:
- host: staging.loklak.org
  http:
    paths:
    - path: /
      backend:
        serviceName: server
        servicePort: 80

For TLS requests, the secret name is also mentioned. Kube-Lego fetches the host name and secret name from here for the certificate:

tls:
- hosts:
  - staging.loklak.org
  secretName: loklak-api-tls

d) Loklak Server, ElasticSearch and Mosquitto at backend

These containers work at the backend. ElasticSearch and Mosquitto are only accessible to Loklak Server, while Loklak Server can be accessed through the Nginx server. Loklak Server is configured to work in http mode and is exposed at port 80.

ports:
- port: 80
  protocol: TCP
  targetPort: 80

To deploy Loklak Server, all these components are deployed in separate pods and they interact through service ports. For deployment, we use the deploy script:

# For elasticsearch, accessible only to api-server
kubectl create -R -f ${path-to-config-file}/elasticsearch/

# For mqtt, accessible only to api-server
kubectl create -R -f ${path-to-config-file}/mosquitto/

# Start KubeLego deployment for TLS certificates
kubectl create -R -f ${path-to-config-file}/lego/
kubectl create -R -f ${path-to-config-file}/nginx/

# Create web namespace, this acts as bridge to Loklak Server
kubectl create -R -f ${path-to-config-file}/web/

# Create API server deployment and expose the services
kubectl create -R -f ${path-to-config-file}/api-server/

# Get the IP address of the deployment to be used
kubectl get services --namespace=nginx-ingress

References
