Data Indexing in Loklak Server

Loklak Server is a data-scraping system that indexes all the scraped data in order to optimize retrieval. The data fetched for different users is stored in a cache, which allows recurring queries to be answered directly from it. When users search for the same query again, the load on Loklak Server is reduced by returning indexed data instead of scraping the same data again. The application depends on Elasticsearch for indexing the cached data (as JSON).

When is data indexing done?

Data is indexed in two situations:

1) When data is scraped: the data is indexed concurrently while it is being cleaned in the TwitterTweet data object. For this task, the static addScheduler method of IncomingMessageBuffer is used, which acts as an abstraction layer between scraping the data and storing and indexing it. The following is the implementation from TwitterScraper (from here). Here writeToIndex is the boolean flag that decides whether or not the data is indexed.

    if (this.writeToIndex)
        IncomingMessageBuffer.addScheduler(this, this.user, true);

2) When data is fetched from the backend: the data is indexed in the Timeline iterator, which calls the same method to index data concurrently. The following is the definition of the writeToIndex() method from Timeline.java (from here). When writeToIndex() is called, the fetched data is indexed.

    public void writeToIndex() {
        IncomingMessageBuffer.addScheduler(this, true);
    }

How is the indexing done?

When the static addScheduler method of IncomingMessageBuffer is called, a thread is started that indexes all the data. Indexing continues as long as the message queue data structure holds messages (see here). The DAO method writeMessageBulk is then called to write the data. The data is written to the following streams:

1) Dump: the fetched data is dumped into a file in the Import directory. It can also be fetched from other peers.

2) Index: the fetched data is checked against the index, and any data that isn't indexed yet is indexed.

    public static Set<String> writeMessageBulk(Collection<MessageWrapper> mws) {
        List<MessageWrapper> noDump = new ArrayList<>();
        List<MessageWrapper> dump = new ArrayList<>();
        for (MessageWrapper mw: mws) {
            if (mw.t == null) continue;
            if (mw.dump) dump.add(mw);
            else noDump.add(mw);
        }
        Set<String> createdIDs = new HashSet<>();
        createdIDs.addAll(writeMessageBulkNoDump(noDump));
        createdIDs.addAll(writeMessageBulkDump(dump)); // also does a writeMessageBulkNoDump internally
        return createdIDs;
    }

The above snippet is from DAO.java. The method call writeMessageBulkNoDump(noDump) indexes the data in Elasticsearch; its definition can be seen here. For dumping the data, writeMessageBulkDump(dump) is called; it is defined here.

Resources:
Iterable: https://docs.oracle.com/javase/8/docs/api/java/lang/Iterable.html
Use of Iterable: https://stackoverflow.com/questions/1059127/what-is-the-iterable-interface-used-for
ElasticSearch Webinar: https://www.elastic.co/webinars/getting-started-elasticsearch?elektra=home&storm=sub1
Ways to iterate through a loop: https://crunchify.com/how-to-iterate-through-java-list-4-way-to-iterate-through-loop/
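
The interplay between addScheduler and the indexing thread is essentially a producer/consumer buffer: the scraper only enqueues messages, while a background thread drains them in bulk. The following is a minimal, hypothetical Java sketch of that idea only; it is not loklak's actual IncomingMessageBuffer, and the class names, queue capacity and batch size of 100 are illustrative assumptions.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Hypothetical stand-in for a scraped message; loklak's real class is TwitterTweet.
    class Message {
        final String id;
        Message(String id) { this.id = id; }
    }

    public class MessageBufferSketch {
        // Bounded queue decouples the scraper threads from the indexing thread.
        private static final BlockingQueue<Message> QUEUE = new LinkedBlockingQueue<>(1000);

        // Equivalent of an addScheduler(...) call: enqueue and return immediately.
        public static void addScheduler(Message m) {
            QUEUE.offer(m); // drop silently if the buffer is full (simplification)
        }

        public static void main(String[] args) throws InterruptedException {
            // The indexing thread drains the queue in batches, similar to a bulk write.
            Thread indexer = new Thread(() -> {
                List<Message> batch = new ArrayList<>();
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        batch.add(QUEUE.take());  // block until at least one message arrives
                        QUEUE.drainTo(batch, 99); // drain up to 100 messages per bulk write
                        System.out.println("bulk indexing " + batch.size() + " messages");
                        batch.clear();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
            indexer.setDaemon(true);
            indexer.start();

            for (int i = 0; i < 250; i++) addScheduler(new Message("msg-" + i));
            Thread.sleep(500); // let the indexer drain the buffer before the demo exits
        }
    }

The property mirrored here is that scraping never blocks on Elasticsearch: the producer only enqueues, and a dedicated thread performs the bulk writes.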


Some Other Services in Loklak Server

Loklak Server isn't just a scraping system; it provides numerous other services that perform other interesting functions, like link unshortening (the reverse of link shortening) and video fetching, as well as administrative tasks like fetching the status of a Loklak deployment (for analysis during Loklak development), and many more. Some of these are implemented internally and the rest can be used through HTTP endpoints. There are also some services which aren't complete and are still in the development stage. Let's go through some of them to learn a bit about what they do and how they can be used.

1) VideoUrlService

This is the service to extract the video from a website that hosts a streaming video and output the video file link. This service is still in the development stage but is functional. Presently, it can fetch Twitter video links and output them in different video qualities.

Endpoint: /api/videoUrlService.json

Implementation Example:

    curl "api.loklak.org/api/videoUrlService.json?id=https://twitter.com/EXOGlobal/status/886182766970257409&id=https://twitter.com/KMbappe/status/885963850708865025"

2) Link Unshortening Service

This is the service used to unshorten links. Shortened URLs are often used by websites to track Internet users. To prevent this, the link unshortening service unshortens the link and returns the final, untrackable link to the user. Currently this service is used in TwitterScraper to unshorten the fetched URLs. It also has methods to get the redirect link and to get the final URL from a chain of multiple shortened links.

Implementation Example from TwitterScraper.java [LINK]:

    Matcher m = timeline_link_pattern.matcher(text);
    if (m.find()) {
        String expanded = RedirectUnshortener.unShorten(m.group(2));
        text = m.replaceFirst(" " + expanded);
        continue;
    }

Further, it can be exposed as a service and used directly. New features, like fetching the featured image from links, can be added to this service. These ideas are still under discussion, and enthusiastic contributions are most welcome.

3) StatusService

This is a service that outputs all data related to a Loklak Server deployment's configuration. To access this configuration, the api endpoint status.json is used. It outputs the following data:

a) The number of messages it scrapes in intervals of a second, a minute, an hour, a day, etc.
b) The configuration of the server, like RAM, assigned memory, used memory, number of CPU cores, CPU load, etc.
c) Other configurations related to the application, like the size and specification of the Elasticsearch shards, the client request header, the number of running threads, etc.

Endpoint: /api/status.json

Implementation Example:

    curl api.loklak.org/api/status.json

Resources:
Code URL Shortener: https://stackoverflow.com/questions/742013/how-to-code-a-url-shortener
URL Shortening-Hashing in Practice: https://blog.codinghorror.com/url-shortening-hashes-in-practice/
ElasticSearch: https://www.elastic.co/webinars/getting-started-elasticsearch?elektra=home&storm=sub1
M3U8 format: https://www.lifewire.com/m3u8-file-2621956
Fetch Video using PHP: https://stackoverflow.com/questions/10896233/how-can-i-retrieve-youtube-video-details-from-video-url-using-php
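
To make the unshortening idea concrete, here is a minimal, hypothetical Java sketch that resolves a shortened URL by following HTTP redirects one hop at a time. It is not loklak's RedirectUnshortener, which handles known shortener hosts and more edge cases; the class name, hop limit and example input are assumptions for illustration only.

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class UnshortenSketch {

        // Follow redirects one hop at a time and return the last URL reached.
        public static String unshorten(String shortUrl) throws Exception {
            String current = shortUrl;
            for (int hops = 0; hops < 5; hops++) {                // guard against redirect loops
                HttpURLConnection con = (HttpURLConnection) new URL(current).openConnection();
                con.setInstanceFollowRedirects(false);            // inspect each redirect ourselves
                con.setRequestMethod("HEAD");                     // headers are enough
                int code = con.getResponseCode();
                String location = con.getHeaderField("Location");
                con.disconnect();
                if (code >= 300 && code < 400 && location != null) {
                    current = new URL(new URL(current), location).toString(); // resolve relative redirects
                } else {
                    break;                                        // no redirect: this is the final URL
                }
            }
            return current;
        }

        public static void main(String[] args) throws Exception {
            // Illustrative input only; any shortened URL would do.
            System.out.println(unshorten("https://t.co/example"));
        }
    }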


Using Elasticsearch Aggregations to Analyse Classifier Data in loklak Server

Loklak uses Elasticsearch to index Tweets and other social media entities. It also houses a classifier that classifies Tweets based on emotion, profanity and language. Earlier, this data was available only through the search API and there was no way to get aggregated data out of it. So, as a part of my GSoC project, I proposed to introduce a new API endpoint which would allow users to access aggregated data from these classifiers. In this blog post, I will be discussing how aggregations are performed on the Elasticsearch index of Tweets in the loklak server.

Structure of the index

The ES index for Twitter is called messages and it has 3 fields related to classifiers:

classifier_emotion
classifier_language
classifier_profanity

With each of these classifiers, we also have a probability attached, which represents the confidence of the classifier in the class assigned to a Tweet. The names of these fields are obtained by suffixing the classifier field with _probability (e.g. classifier_emotion_probability). Since I will also be discussing aggregation based on countries in this blog post, there is also a field named place_country_code which stores the ISO 3166-1 alpha-2 code for the country where the Tweet was created.

Requesting aggregations using the Elasticsearch Java API

Elasticsearch comes with a simple Java API which can be used to perform any desired task. To work with the data, we need an ES client, which can be built from an ES Node (if creating a cluster) or directly as a transport client (if connecting remotely to a cluster):

    // Transport client
    TransportClient tc = TransportClient.builder()
        .settings(mySettings)
        .build();

    // From a node
    Node elasticsearchNode = NodeBuilder.nodeBuilder()
        .local(false).settings(mySettings)
        .node();
    Client nc = elasticsearchNode.client();

[SOURCE]

Once we have a client, we can use ES AggregationBuilders to get aggregations from an index:

    SearchResponse response = elasticsearchClient.prepareSearch(indexName)
        .setSearchType(SearchType.QUERY_THEN_FETCH)
        .setQuery(QueryBuilders.matchAllQuery()) // Consider every row
        .setFrom(0).setSize(0)                   // 0 offset, 0 result size (do not return any rows)
        .addAggregation(aggr)                    // aggr is an AggregationBuilder object
        .execute().actionGet();                  // Execute and get results

[SOURCE]

AggregationBuilders are objects that define the properties of an aggregation task using ES's Java API. This code snippet is applicable for any type of aggregation that we wish to perform on an index, given that we do not want to fetch any rows in the response.

Performing a simple aggregation for a classifier

In this section, I will discuss the process of getting results from a given classifier in loklak's ES index. Here, we will be targeting a class-wise count of rows and stats (average and sum) of the probabilities.

Writing the AggregationBuilder

An AggregationBuilder for this task will be a Terms AggregationBuilder, which dynamically generates buckets for all the different values of a given field in the index:

    AggregationBuilder getClassifierAggregationBuilder(String classifierName) {
        String probabilityField = classifierName + "_probability";
        return AggregationBuilders.terms("by_class").field(classifierName)
            .subAggregation(
                AggregationBuilders.avg("avg_probability").field(probabilityField)
            )
            .subAggregation(
                AggregationBuilders.sum("sum_probability").field(probabilityField)
            );
    }

[SOURCE]

Here, the name of the aggregation is passed as by_class. This will be used while processing the results of this aggregation task. Also, sub-aggregations are used to get the average and sum probability by the…
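
The excerpt is cut off before result processing, but as a rough idea, the buckets produced by the by_class aggregation could be unpacked along the following lines. This is a sketch assuming the Elasticsearch 2.x-era Java client API that loklak used at the time (the Avg/Sum package paths differ in newer versions); it is not the endpoint's actual code.

    import java.util.LinkedHashMap;
    import java.util.Map;

    import org.elasticsearch.action.search.SearchResponse;
    import org.elasticsearch.search.aggregations.bucket.terms.Terms;
    import org.elasticsearch.search.aggregations.metrics.avg.Avg;
    import org.elasticsearch.search.aggregations.metrics.sum.Sum;

    public class ClassifierAggregationReader {

        // Walk the "by_class" terms aggregation and collect, for every class bucket,
        // the document count and the average/sum of the classifier probability.
        public static Map<String, Map<String, Double>> readClassifierAggregation(SearchResponse response) {
            Map<String, Map<String, Double>> result = new LinkedHashMap<>();
            Terms byClass = response.getAggregations().get("by_class");
            for (Terms.Bucket bucket : byClass.getBuckets()) {
                Avg avg = bucket.getAggregations().get("avg_probability");
                Sum sum = bucket.getAggregations().get("sum_probability");
                Map<String, Double> stats = new LinkedHashMap<>();
                stats.put("count", (double) bucket.getDocCount());
                stats.put("avg_probability", avg.getValue());
                stats.put("sum_probability", sum.getValue());
                result.put(bucket.getKeyAsString(), stats);
            }
            return result;
        }
    }

Each key of the returned map would be a class label (for example an emotion), and the nested map holds the document count and probability stats for that class, ready to be serialized into the API response.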


Lazy loading images in Loklak Search

Loklak Search delivers media-rich content to its users. Most of the media delivered is in the form of images. In the earlier versions of loklak search, these images were delivered to the users unconditionally: whether the image was required by the user or not, it was delivered, consuming bandwidth and slowing down the initial load of the app, because a large amount of data had to be fetched before the page was ready. The 404 errors were also not being handled, giving the feel of a broken UI. So we required a mechanism to control this loading process and tap into its various aspects to handle the edge cases. This, on the whole, required a few new Web Standard APIs to enable the smooth working of this feature. These APIs are:

IntersectionObserver API
Fetch API

As the details of this feature are involved and comprise new API standards, I have divided this into two posts: one with the basics of the above-mentioned APIs, the outline approach to the module and its subcomponents, and the difficulties we faced. The second post will mostly comprise the details of the code which went into making this feature and how we tackled the corner cases along the path.

Overview

Our goal here was to create a component which can lazily load images and provide UI feedback to the user in case of any load or parse error. As mentioned above, the development of this feature depends on relatively new web standard APIs, so it's important to understand the functioning of these APIs before we see how they become an intrinsic part of our LazyImgComponent.

Intersection Observer

Historically, detecting the intersection of two elements on the web in a performant way has been practically impossible, because it required constantly polling the DOM for the ClientRect of the element we want to check for intersection; since these operations run on the main thread, this kind of polling has always been a source of bottlenecks in application performance. The Intersection Observer API is a web standard to detect when two DOM elements intersect with each other. It helps us configure a callback that runs whenever an element, called the target, intersects with another element (the root) or the viewport. Creating an intersection observer is a simple task; we just have to create a new instance of the observer.

    var observer = new IntersectionObserver(callback, options);

Here the callback is the function to run whenever an observed element intersects with the element supplied to the observer. This element is configured in the options object passed to the Intersection Observer:

    var options = {
      root: document.querySelector('#root'), // Defaults to viewport if null
      rootMargin: '0px', // The margin around root within which the callback is triggered
      threshold: 1.0
    }

The target element, whose intersection with the main element is to be tested, can be set up using…


Updating Page Titles Dynamically in Loklak Search

Page titles are native to the web platform and are a prime way to identify any page; they have been part of the web platform for ages. They tell browsers, web scrapers and search engines about the page content in 1-2 words. Since titles are used for a wide variety of things, from the presentation of the page and history references to, most importantly, categorisation of pages by search engines, it becomes very important for any web application to update the title of the page appropriately. In the earlier implementation of loklak search the page title was a constant and was never updated, which was not good from a presentation and SEO perspective.

The problem of page titles in an SPA

Since loklak search is a single page application, there are a few differences in the page title implementation compared to a server-served multi-page application. In a server-served multi-page application, the whole application is divided into pages and the server knows which page it is serving, so it can easily set the title of the page while rendering the template. A simple example is a base Django template which holds the place for a title block of the application.

    <!-- base.html -->
    <title>{% block title %} Loklak Search {% endblock %}</title>
    <!-- Other application blocks -->

Now, for any other page extending this base.html, it is very simple to update the title block by simply replacing it with its own title.

    <!-- home.html -->
    {% extends 'base.html' %}
    {% block title %} Home Page - Loklak Search {% endblock %}
    <!-- Other page blocks -->

When the above template is rendered by the templating engine, it replaces the title block of base.html with the updated title block specific to the page. Thus, for each page, the server is able to update the page title appropriately at rendering time. But in an SPA, the server just acts as a set of REST endpoints, and all the templating is done on the client side. Thus, in an SPA the page title never changes automatically from the server, as only the client is in control of which page (route) it is showing. It therefore becomes the duty of the client side to update the title of the page appropriately, and this issue of static, non-informative page titles is often overlooked.

Updating page titles in Loklak Search

Before being able to solve the issue of updating the page titles, it is very important to understand at which points in the application we need to update the page title:

Whenever the route in the application changes.
Whenever a new query is fetched from the server.

These two are the most important places where we definitely want to update the titles. The way we achieved this is by using the Angular Title service. The Title service is a platform-browser service by Angular which abstracts the workflow needed to update the title. There are two main…


Search Engine Optimization and Meta Tags in Loklak Search

Ranking higher in search results is very important for any website's productivity and reach. These days, modern search engines use algorithms to scrape sites for relevant data points, and those data points are then processed into a relevance number, aka the page ranking. Although the algorithms which search engines use are now very advanced and are able to derive the context of a page from its content, there are still some key points which developers should follow to enable a higher page ranking in search results, along with a better presentation of search results on the pages. Loklak Search is also a web application, so it is very important to get these crucial points correct.

The first thing search engines see on a website is its landing or index page. This page tells the search engine what the data on the site is about, and the search engine then follows links to crawl the details of the site. So the landing page should be able to provide the exact context for the page. The page can provide this context using meta tags, which store metadata information about the page that can be used by search engines and social sites.

Meta Charset

The first and most important tag is the charset tag. It specifies the character set being used on the page and declares the page's character encoding, which is used by the browser's encoding algorithm. It is very important to determine a correct and compatible charset for security and presentation reasons. Thus, the most widely used charset, UTF-8, is used in loklak search.

    <meta charset="utf-8">

Meta Viewport

Mobile browsers often load a page in a representative viewport which is usually larger than the actual screen of the device, so that the content of the page is not crammed into a small space. This forces the user to pan and zoom around the page to reach the desired content. But this approach is often an undesirable design for most mobile websites. For this, at loklak search we use:

    <meta name="viewport" content="width=device-width, initial-scale=1">

This specifies the relation between CSS pixels and device pixels. The relationship is actually computed by the browser itself, but this meta tag says that the width used for calculating that ratio should be equal to the device width and the initial scale should be 1 (i.e. no zoom).

Meta Description

The meta description tag is the most important tag for SEO, as this is the description used by search engines and social media sites when showing the description of the page.

    <meta name="description"
          content="Search social media on Loklak Search. Our mission is to make the world’s social media information openly accessible and useful generating open knowledge for all">

This is how the description tag is used by Google on the Google Search results page.

Social media meta tags

The social media meta tags are important for the presentation of the content of the page…


Making loklak Server’s Kaizen Harvester Extendable

Harvesting strategies in loklak are something that end users can't see, but they play a vital role in deciding the way in which messages are collected by loklak. One of the strategies in loklak is defined by the Kaizen Harvester, which generates queries from collected messages. The original strategy used a simple hash queue which drops queries once it is full. This effect is not desirable, as we tend to lose important queries in the process if they come up late while harvesting. To overcome this behaviour without losing important search queries, we needed to come up with new harvesting strategies that provide a better approach for harvesting. In this blog post, I am discussing the changes made in the Kaizen harvester so it can be extended to create different flavors of harvesters.

What can be different in extended harvesters?

To make the Kaizen harvester extendable, we first needed to decide which parts of the original Kaizen harvester can be changed to make the strategy different (and probably better). Since one of the most crucial parts of the Kaizen harvester was the way it stores the queries to be processed, it was one of the most obvious things to change. Another thing that should be configurable across strategies is the decision of whether to go for harvesting the queries from the query list.

Query storage with KaizenQueries

To allow different methods of storing the queries, the KaizenQueries class was introduced in loklak. It provides the basic methods that any query-storing technique needs to implement. A query-storing technique can be any data structure that we can use to store search queries for the Kaizen harvester.

    public abstract class KaizenQueries {

        public abstract boolean addQuery(String query);

        public abstract String getQuery();

        public abstract int getSize();

        public abstract int getMaxSize();

        public boolean isEmpty() {
            return this.getSize() == 0;
        }
    }

[SOURCE]

Also, a default type of KaizenQueries was introduced for use in the original Kaizen harvester. It exposes the same interface as the queue which was originally used in the harvester. Another constructor was introduced in the Kaizen harvester which allows setting the KaizenQueries for an instance of its derived classes. It solved the problem of providing a KaizenQueries interface inside the Kaizen harvester which can be used by any inherited strategy:

    private KaizenQueries queries = null;

    public KaizenHarvester(KaizenQueries queries) {
        ...
        this.queries = queries;
        ...
    }

    public void someMethod() {
        ...
        this.queries.addQuery(someQuery);
        ...
    }

[SOURCE]

With this added, getting or adding new queries became simple: we just use the getQuery() and addQuery() methods without worrying about the internal implementation (see the sketch after this section for an example of a possible implementation).

Configurable decision for harvesting

As mentioned earlier, the decision to harvest should also be configurable. For this, a protected method was implemented and used in the harvest() method:

    protected boolean shallHarvest() {
        float targetProb = random.nextFloat();
        float prob = 0.5F;
        if (this.queries.getMaxSize() > 0) {
            prob = queries.getSize() / (float)queries.getMaxSize();
        }
        return !this.queries.isEmpty() && targetProb…
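
As an illustration of how the abstract class above can be extended, here is a hypothetical KaizenQueries implementation: a bounded, duplicate-free FIFO backed by a LinkedHashSet. It is not one of the implementations that ship with loklak; the class name, capacity handling and eviction policy are assumptions made for the example.

    import java.util.Iterator;
    import java.util.LinkedHashSet;

    // Illustrative only: a bounded FIFO of queries that ignores duplicates and
    // keeps insertion order, built on the KaizenQueries contract shown above.
    public class FifoKaizenQueries extends KaizenQueries {

        private final LinkedHashSet<String> queries = new LinkedHashSet<>();
        private final int maxSize;

        public FifoKaizenQueries(int maxSize) {
            this.maxSize = maxSize;
        }

        @Override
        public boolean addQuery(String query) {
            if (this.queries.size() >= this.maxSize) {
                return false;               // full: the caller decides what to do with the query
            }
            return this.queries.add(query); // false if the query is already queued
        }

        @Override
        public String getQuery() {
            // Remove and return the oldest query, or null if the queue is empty.
            Iterator<String> it = this.queries.iterator();
            if (!it.hasNext()) return null;
            String query = it.next();
            it.remove();
            return query;
        }

        @Override
        public int getSize() {
            return this.queries.size();
        }

        @Override
        public int getMaxSize() {
            return this.maxSize;
        }
    }

A harvester flavor built on it would then be constructed with the constructor shown above, for example new KaizenHarvester(new FifoKaizenQueries(500)), where the capacity of 500 is an arbitrary choice.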


Accessing Child Component’s API in Loklak Search

Loklak search, being an Angular application, is composed of components. Components give us a way to organize the application more consistently, along with the ability to reuse code in the application. Each component has two types of API: public and private. The public API is the API it exposes to the outer world for manipulating the working of the component, while the private API is local to the component and cannot be directly accessed by the outside world. Now that this distinction between the two is clear, it is important to state the need for these APIs and why they are required in loklak search. Components can never live in isolation, i.e. they have to communicate with their parent to be able to function properly. The same is the case with the components of loklak search: they have to interact with each other to make the application work.

So what does this interaction look like? The rule of thumb here is: data flows down, events flow up. This is the core idea of all the SPA frameworks of modern times, unidirectional data flow, and these interactions can be seen everywhere in loklak search.

    <feed-header
        [query]="query"
        (searchEvent)="doSearch($event)"></feed-header>

This is how a simple component's API looks in loklak search. Here our component is FeedHeader and it exposes some of its API as inputs and outputs.

    export class FeedHeaderComponent {
      @Input() query: string;
      @Output() searchEvent: EventEmitter<string> = new EventEmitter<string>();

      // Other methods and properties of the component
    }

The FeedHeaderComponent's class defines the inputs it takes. These inputs are the data given to the component. Here the input is a simple query property, and the parent, at the time of instantiating the component, passes the value to its child as [query]="query". This enables one direction of the API, from parent to child. Now, we also need a way for the parent to be able to react to events generated by the child on interaction with the user. For example, here we need a way to tell the parent to perform a search whenever the user presses the search button. For this, the output property searchEvent is used. The search event can be emitted by the child component independently, while the parent, if it wants to listen to the child component, simply does so by binding to the event and running a corresponding function whenever the event is emitted: (searchEvent)="doSearch($event)". Here the event which the parent listens to is searchEvent, and whenever such an event is emitted by the child, the function doSearch is run by the parent. This completes the event flow from child to parent. It is worth noticing that all these inputs for data and outputs for events are provided by the child component itself. They are the API of the child, and the parent's job is just to bind to these inputs and outputs, to pass data and listen to events. This allows component interactions in both directions.

@ViewChild and triggering the child's methods

The inputs are important to carry data…


Posting Scraped Tweets to Loklak server from Loklak Wok Android

Loklak Wok Android is a peer harvester that posts collected messages to the Loklak Server. The suggestions for tweet searches are fetched using the suggest API endpoint. Using the suggestion queries, tweets are scraped. The scraped tweets are shown in a RecyclerView and are simultaneously posted to the loklak server using the push API endpoint. Let's see how this is implemented.

Adding dependencies to the project

This feature heavily uses Retrofit2, Reactive Extensions (RxJava2, RxAndroid and the Retrofit RxJava adapter) and RetroLambda (for Java lambda support in Android). In app/build.gradle:

    apply plugin: 'com.android.application'
    apply plugin: 'me.tatarka.retrolambda'

    android {
        ...
        packagingOptions {
            exclude 'META-INF/rxjava.properties'
        }
    }

    dependencies {
        ...
        compile 'com.google.code.gson:gson:2.8.1'
        compile 'com.squareup.retrofit2:retrofit:2.3.0'
        compile 'com.squareup.retrofit2:converter-gson:2.3.0'
        compile 'com.squareup.retrofit2:adapter-rxjava2:2.3.0'
        compile 'io.reactivex.rxjava2:rxjava:2.0.5'
        compile 'io.reactivex.rxjava2:rxandroid:2.0.1'
    }

In the project-level build.gradle:

    dependencies {
        classpath 'com.android.tools.build:gradle:2.3.3'
        classpath 'me.tatarka:gradle-retrolambda:3.2.0'
    }

Implementation

The suggest and push API endpoints are defined in the LoklakApi interface:

    public interface LoklakApi {

        @GET("/api/suggest.json")
        Observable<SuggestData> getSuggestions(@Query("q") String query, @Query("count") int count);

        @POST("/api/push.json")
        @FormUrlEncoded
        Observable<Push> pushTweetsToLoklak(@Field("data") String data);
    }

The POJOs (Plain Old Java Objects) for suggestions and for posting tweets are obtained using jsonschema2pojo; Gson uses the POJOs to convert JSON to Java objects. The REST client is created by Retrofit2 and is implemented in the RestClient class. The Gson converter and the RxJava adapter for Retrofit are added in the Retrofit builder. The create method is called to generate the API methods (Retrofit implements the LoklakApi interface).

    public class RestClient {

        private RestClient() {
        }

        private static void createRestClient() {
            sRetrofit = new Retrofit.Builder()
                    .baseUrl(BASE_URL)
                    // gson converter
                    .addConverterFactory(GsonConverterFactory.create(gson))
                    // retrofit adapter for rxjava
                    .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
                    .build();
        }

        private static Retrofit getRetrofitInstance() {
            if (sRetrofit == null) {
                createRestClient();
            }
            return sRetrofit;
        }

        public static <T> T createApi(Class<T> apiInterface) {
            // create method to generate API methods
            return getRetrofitInstance().create(apiInterface);
        }
    }

The suggestions are fetched by calling getSuggestions after the LoklakApi interface is implemented. getSuggestions returns an Observable of type SuggestData, which contains the suggestions in a List.

For scraping tweets, only a single query needs to be passed to LiquidCore at a time, so flatMap is used to transform the observable, and then the fromIterable operator is used to emit single queries as strings to LiquidCore, which then scrapes the tweets. This is implemented in fetchSuggestions:

    private Observable<String> fetchSuggestions() {
        LoklakApi loklakApi = RestClient.createApi(LoklakApi.class);
        Observable<SuggestData> observable = loklakApi.getSuggestions("", 2);
        return observable.flatMap(suggestData -> {
            List<Query> queryList = suggestData.getQueries();
            List<String> queries = new ArrayList<>();
            for (Query query : queryList) {
                queries.add(query.getQuery());
            }
            return Observable.fromIterable(queries);
        });
    }

As LiquidCore uses callbacks to create a connection between the NodeJS instance and Android, a custom observable is created using the create operator to maintain the flow of observables; it encapsulates the callbacks inside it. For a detailed understanding of how LiquidCore event handling works, please go through the example. This is how it is implemented in getScrapedTweets:

    private Observable<ScrapedData> getScrapedTweets(final String query) {
        final String LC_TWITTER_URI = "android.resource://org.loklak.android.wok/raw/twitter";
        URI uri = URI.create(LC_TWITTER_URI);
        return Observable.create(emitter -> { // custom observable creation
            EventListener startEventListener = (service, event, payload) -> {
                service.emit(LC_QUERY_EVENT, query);
                service.emit(LC_FETCH_TWEETS_EVENT);
            };

            EventListener getTweetsEventListener = (service, event, payload) -> {
                ScrapedData scrapedData = mGson.fromJson(payload.toString(), ScrapedData.class);…
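
To show how the pieces above could fit into one RxJava2 pipeline (fetch suggestion queries, scrape tweets for each query, then push the results to the loklak server), here is a hypothetical sketch. The method names fetchSuggestions() and getScrapedTweets() and the mGson field are the ones shown above; the harvest() method name, the serialization of ScrapedData into the "data" form field, and the error handling are assumptions and may differ from the actual app.

    import android.util.Log;

    import io.reactivex.android.schedulers.AndroidSchedulers;
    import io.reactivex.disposables.Disposable;
    import io.reactivex.schedulers.Schedulers;

    // ... inside the class that defines fetchSuggestions(), getScrapedTweets() and mGson ...
    private Disposable harvest() {
        return fetchSuggestions()                                // Observable<String> of queries
                .flatMap(this::getScrapedTweets)                 // Observable<ScrapedData>
                .flatMap(scrapedData -> {
                    // Serialize the scraped tweets for the push endpoint's "data" field
                    // (the exact payload shape is an assumption here).
                    String json = mGson.toJson(scrapedData);
                    LoklakApi loklakApi = RestClient.createApi(LoklakApi.class);
                    return loklakApi.pushTweetsToLoklak(json);   // Observable<Push>
                })
                .subscribeOn(Schedulers.io())                    // network work off the UI thread
                .observeOn(AndroidSchedulers.mainThread())       // deliver results on the UI thread
                .subscribe(
                    push -> Log.d("Harvester", "tweets pushed"),
                    throwable -> Log.e("Harvester", "harvest failed", throwable));
    }

The returned Disposable would typically be disposed in onDestroy() (or a similar lifecycle callback) so the network work stops when the screen goes away.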


Realm database in Loklak Wok Android for Persistent view

Loklak Wok Android provides suggestions for tweet searches. The suggestions are stored in a local database to provide a persistent view, resulting in a better user experience. The local database used here is the Realm database instead of sqlite3, which is what the Android SDK supports. The proper way to use an sqlite3 database is to first create a contract where the schema of the database is defined, then a database helper class which extends the SQLiteOpenHelper class, where the schema is created, i.e. the tables are created, and finally to write a ContentProvider so that you don't have to write long SQL queries every time a database operation needs to be performed. This is a lot of hard work, as it involves many steps, and debugging is also difficult. A solution could be an ORM that provides a simple API over sqlite3, but the currently available ORMs lack in performance; they are too slow. A reliable solution to this problem is the Realm database, which is faster than raw sqlite3 and has a really simple API for database operations. This blog explains the use of the Realm database for storing tweet search suggestions.

Adding the Realm database to an Android project

In the project-level build.gradle:

    buildscript {
        repositories {
            jcenter()
        }
        dependencies {
            classpath 'com.android.tools.build:gradle:2.3.3'
            classpath "io.realm:realm-gradle-plugin:3.3.1"

            // NOTE: Do not place your application dependencies here; they belong
            // in the individual module build.gradle files
        }
    }

And at the top of app/build.gradle, apply plugin: 'realm-android' is added.

Using the Realm Database

Let's start with a simple example. We have a Student class that has only two attributes, name and age. To create the model for the database, the Student class simply extends RealmObject.

    public class Student extends RealmObject {

        private String name;
        private int age;

        // Realm model classes need a public no-argument constructor, so one must be
        // defined explicitly once a parameterized constructor is added
        public Student() {
        }

        public Student(String name, int age) {
            this.name = name;
            this.age = age;
        }

        // getters and setters
    }

To push data to the database, Java objects are created, a transaction is initialized, then the copyToRealm method is used to push the data, and finally the transaction is committed. But before all this, the database is initialized and a Realm instance is obtained.

    Realm.init(context); // Database initialized
    Realm realm = Realm.getDefaultInstance(); // realm instance obtained

    Student student = new Student("Rahul Dravid", 22); // Simple java object created
    realm.beginTransaction(); // initialization of transaction
    realm.copyToRealm(student); // pushed to database
    realm.commitTransaction(); // transaction committed

copyToRealm takes only a single parameter; the parameter can be an object or an Iterable, and of course it should extend RealmObject. A List of Student objects can be passed to copyToRealm to push multiple entries into the database. The above way of inserting data is synchronous. Realm also supports asynchronous transactions; you guessed it right, you don't have to depend on AsyncTaskLoader. The same operation can be performed asynchronously as

    realm.executeTransaction(new Realm.Transaction() {
        @Override
        public void execute(Realm realm) {
            Student student = new Student("Rahul Dravid",…
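
The excerpt covers writing data; for completeness, here is a brief sketch of reading it back with Realm's fluent query API, assuming the Student model shown above and the usual getName()/getAge() getters. This is an illustration, not code from the Loklak Wok app.

    import io.realm.Realm;
    import io.realm.RealmResults;

    // Assumes Realm.init(context) has already been called, as shown above.
    Realm realm = Realm.getDefaultInstance();

    // Fetch every stored Student.
    RealmResults<Student> allStudents = realm.where(Student.class).findAll();

    // Fetch students matching a condition, e.g. a particular name.
    RealmResults<Student> dravids = realm.where(Student.class)
            .equalTo("name", "Rahul Dravid")
            .findAll();

    for (Student s : dravids) {
        // RealmResults is a live view over the matching rows in the database.
        System.out.println(s.getName() + " is " + s.getAge());
    }

    realm.close(); // release the instance when this thread is done with it

The query builder (where(), equalTo(), findAll()) replaces hand-written SQL, which is a large part of why the post argues Realm is simpler to use than raw sqlite3 with a ContentProvider.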
