Deploy to Azure Button for loklak

In this blog post, I am going to tell you about yet another deployment method for loklak, one that is easy and quick: just one click. Deploying to Azure Websites from a Git repository just got a little easier with the Deploy to Azure Button. Simply place the button in README.md with a link to loklak, and users who click on it will be directed to a streamlined deployment process. If we want to do something more advanced and customize this behavior, we can add an ARM template called "azuredeploy.json" at the root of the repository, which causes users to be presented with different inputs and configures the services as specified. I'm going to walk you through a workflow that I used to test the templates before checking them in to my repo, as well as describe some of the special behaviors of the "Deploy to Azure" site.

Adding a button

To add a deployment button, insert the following markdown into your README.md file:

```markdown
[![Deploy to Azure](https://azuredeploy.net/deploybutton.svg)](https://deploy.azure.com/?repository=https://github.com/loklak/loklak_server)
```

How it works

When a user clicks on the button, a "referrer" header is sent to azuredeploy.net which contains the location of the Git repository of loklak_server to deploy from.

An Example Template

This blank template shows how Azure divides its inputs:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}
```

Following the above template, in the case of loklak server the parameters used are name, image (i.e. the Docker image), port, the number of CPUs to be utilized, and space (i.e. the memory required). In the resources section we use a container; the type of the container is:

```json
"type": "Microsoft.ContainerInstance/containerGroups",
```

And as output, we expect a public IP address to access the Azure cloud instance created by us. Everything under the root "parameters" property is an input into our template. These parameter values then feed into the resources defined later in the template with the "[parameters('paramName')]" syntax.

Try the "Deploy to Azure" button here: https://deploy.azure.com/?repository=https://github.com/loklak/loklak_server

Resources

Different Azure templates are available here: https://github.com/Azure/azure-quickstart-templates
More about the Deploy to Azure button: https://www.microsoft.com/developerblog/2017/01/17/the-deploy-to-azure-button/
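As an addendum, here is a minimal sketch of how the parameter-to-resource flow described above could be filled in for a container deployment. The parameter names follow the post, but the exact property layout, API version, and default values are assumptions for illustration, not loklak's actual azuredeploy.json:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "name":  { "type": "string" },
    "image": { "type": "string" },
    "port":  { "type": "int",    "defaultValue": 80 },
    "cpus":  { "type": "int",    "defaultValue": 1 },
    "space": { "type": "string", "defaultValue": "1.5" }
  },
  "resources": [
    {
      "type": "Microsoft.ContainerInstance/containerGroups",
      "apiVersion": "2017-08-01-preview",
      "name": "[parameters('name')]",
      "location": "[resourceGroup().location]",
      "properties": {
        "osType": "Linux",
        "containers": [
          {
            "name": "[parameters('name')]",
            "properties": {
              "image": "[parameters('image')]",
              "ports": [ { "port": "[parameters('port')]" } ],
              "resources": {
                "requests": {
                  "cpu": "[parameters('cpus')]",
                  "memoryInGB": "[float(parameters('space'))]"
                }
              }
            }
          }
        ],
        "ipAddress": {
          "type": "Public",
          "ports": [ { "protocol": "tcp", "port": "[parameters('port')]" } ]
        }
      }
    }
  ],
  "outputs": {
    "publicIP": {
      "type": "string",
      "value": "[reference(parameters('name')).ipAddress.ip]"
    }
  }
}
```

Note how every "[parameters('...')]" expression in the resources and outputs sections pulls its value from the user-supplied inputs, which is exactly the flow the post describes.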


Feeds Moderation in loklak Media Wall

Loklak Media Wall provides client-side filters for entities received from the loklak status.json API, like blocking feeds from a particular user, removing duplicate feeds, and hiding a particular feed post, for moderating feeds. To implement this, we need pure functions which remove the requested type of feeds and return a new array of feeds. Moreover, the original set of data must also be stored in an array so that if filters are removed, the original data can be provided back to the user. In this blog, I explain how I implemented client-side filters to filter out particular types of feeds and provide the user with the cleaner data they requested.

Types of filters

There are four client-side filters currently provided by Loklak media wall:

- Profanity Filter: Checks for feeds that might be offensive and removes them.
- Remove Duplicate: Removes duplicate feeds and retweets from the original feeds.
- Hide Feed: Removes a particular feed from the feeds.
- Block User: Blocks a user and removes all the feeds from that particular user.

It is also important to ensure that on pagination, new feeds are filtered out based on the previously requested moderation.

Flow Chart

The flow chart explains how the different entities received from the server are filtered, and how the original set of entities is maintained so that if the user removes a filter, the originally filtered entities are recovered.

Working

Profanity Filter

To avoid any obscene language used in a feed status showing up on the media wall, and to provide rather clean data, the profanity filter can be used. For this filter, the loklak search.json API provides a field classifier_profanity which states whether a swear word is used in the status. We can check the value of this field and filter out the feed accordingly.

```typescript
export function profanityFilter(feeds: ApiResponseResult[]): ApiResponseResult[] {
    const filteredFeeds: ApiResponseResult[] = [];
    feeds.forEach((feed) => {
        if ( feed.classifier_language !== null && feed.classifier_profanity !== undefined ) {
            if (feed.classifier_profanity !== 'sex' && feed.classifier_profanity !== 'swear') {
                filteredFeeds.push(feed);
            }
        } else {
            filteredFeeds.push(feed);
        }
    });
    return filteredFeeds || feeds;
}
```

Here, we check that the classifier_profanity field is neither 'swear' nor 'sex', which clearly classifies the feeds, and we push the status accordingly. Moreover, if no classifier_profanity field is provided for a particular feed, we push the feed into the filtered feeds.

Remove Duplicate

The remove duplicate filter removes the tweets that are either retweets or copies of some feed and returns just one original feed. We need to compare the field id_str, which is the status id of the feed, and remove the duplicate feeds. For this filter, we need to create a map, compare the feeds on the map object, remove the duplicate feeds iteratively, and return the array of feeds with unique elements.

```typescript
export function removeDuplicateCheck(feeds: ApiResponseResult[]): ApiResponseResult[] {
    const map = { };
    const filteredFeeds: ApiResponseResult[] = [];
    const newFeeds: ApiResponseResult[] = feeds;
    let v: string;
    for (let a = 0; a < feeds.length; a++) { v…
```
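Since the excerpt cuts off mid-function, here is a minimal sketch of the map-based de-duplication idea described above, keyed on id_str. The function name is hypothetical, and this illustrates the technique rather than reproducing the post's exact implementation:

```typescript
export function removeDuplicatesSketch(feeds: ApiResponseResult[]): ApiResponseResult[] {
    const seen: { [id: string]: boolean } = {};
    const filteredFeeds: ApiResponseResult[] = [];
    for (const feed of feeds) {
        // id_str uniquely identifies a status, so a second occurrence
        // of the same id is a retweet or copy and is skipped.
        if (!seen[feed.id_str]) {
            seen[feed.id_str] = true;
            filteredFeeds.push(feed);
        }
    }
    return filteredFeeds;
}
```

Because the map lookup is constant time, the whole pass stays linear in the number of feeds, which matters when the wall paginates continuously.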


Reactive Side Effects of Actions in Loklak Search

In a Redux based application, every component of the application is state driven. Redux based applications manage state in a predictable way, using a centralized Store, and Reducers to manipulate various aspects of the state. Each reducer controls a specific part of the state, and this allows us to write code which is testable; state is shared between the components in a stable way, i.e. there are no undesired mutations to the state from any component. This undesired mutation of the shared state is prevented by using a set of predefined functions called reducers, which are central to the system and update the state in a predictable way. These reducers require some sort of trigger to run. This blog post concentrates on these triggers, how these triggers in turn get chained to form a reactive chaining of events which occur in a predictable way, and how this technique is used in the latest application structure of Loklak Search.

In any state based asynchronous application like Loklak Search, the main issue with state management is handling the asynchronous action streams in a predictable manner and chaining asynchronous events one after the other. The technique of reactive action chaining solves the problem of dealing with asynchronous data streams in a predictable and manageable manner.

Overview

Actions are the triggers for the reducers; each redux action consists of a type and an optional payload. The type of the action is like its ID, which should purposely be unique in the application. Each reducer function takes the current state which it controls and the action which is dispatched. The reducer decides whether it needs to react to that action or not. If the reducer reacts to the action, it modifies the state according to the action payload and returns the modified state; else, it returns the original state. So at the core, the actions are like the triggers in the application which make one or more reducers do their work. This is the basic architecture of any redux application: the actions are the triggers, and the reducers are the state maintainers and modifiers. The only way to modify the state is via a reducer, and a reducer only runs when a corresponding action is dispatched.

Now, who dispatches these actions? This question is very important. Actions can technically be dispatched from anywhere in the application: from components, from services, from directives, from pipes, etc. But in almost every situation we will want the action to be dispatched by a component. A component which wishes to modify the state dispatches the corresponding actions.

Reactive Effects

If the components are the ones who dispatch the action, which triggers a reducer function, which modifies the state, then what are these effects, since the cycle of events seems pretty much complete? The Effects are the Side Effects of a particular action. The term "side effect" means these are the pieces of code which run whenever an action is dispatched. Don't confuse them with the…
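To illustrate the pattern, here is a minimal sketch of an ngrx effect in the style this era of Loklak Search uses. The action and service names here are hypothetical; what the sketch shows is the reactive chain itself — listen for an action, run the asynchronous work, and dispatch the follow-up action that the reducers (and possibly further effects) react to in turn:

```typescript
import { Injectable } from '@angular/core';
import { Action } from '@ngrx/store';
import { Actions, Effect } from '@ngrx/effects';
import { Observable } from 'rxjs/Observable';
import { of } from 'rxjs/observable/of';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/switchMap';
import 'rxjs/add/operator/catch';

import * as apiAction from '../actions/api';      // hypothetical action definitions
import { SearchService } from '../services/search'; // hypothetical service

@Injectable()
export class SearchEffects {

    // Side effect of the SEARCH action: perform the API call, then
    // chain the result into a success or fail action, continuing
    // the predictable cycle of triggers described above.
    @Effect()
    search$: Observable<Action> = this.actions$
        .ofType(apiAction.ActionTypes.SEARCH)
        .map((action: apiAction.SearchAction) => action.payload)
        .switchMap(query =>
            this.apiSearchService.fetchQuery(query)
                .map(response => new apiAction.SearchCompleteSuccessAction(response))
                .catch(() => of(new apiAction.SearchCompleteFailAction('')))
        );

    constructor(
        private actions$: Actions,
        private apiSearchService: SearchService
    ) { }
}
```

The key design point is that the component only dispatches SEARCH; the effect owns the asynchronous work and emits the next action in the chain, so every state change still flows through a reducer.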


Enabling Google App Signing for Android Project

Signing key management of Android apps is a hectic procedure and can grow out of hand rather quickly for large organizations with several independent projects. We at FOSSASIA also had to face similar difficulties in the management of individual keys by project maintainers and wanted to gather all these Android projects under a singular key management platform:

- Phimp.me
- Pocket Science Lab
- loklak wok
- Open Event Android and sample apps
- eventyay Organizer App
- Ask SUSI.AI

To handle the complexities and security aspects of the process, this year Google announced the optional App Signing program, where Google takes your existing key's encrypted file and stores it on their servers, and asks you to create a new upload key which will be used to sign further updates of the app. Google takes the certificates of your new upload key and maps them to the managed private key. Now, whenever there is a new upload of the app, its signing certificate is matched with the upload key certificate, and after verification the app is signed by the original private key on the server itself and delivered to the user.

The advantage comes when you lose your key or its password, or when it is compromised. Before the App Signing program, if your key got lost, you had to launch your app under a new package name, losing your existing user base. With Google managing your key, if you lose your upload key, the account owner can request Google to assign a new upload key, as the private key is secure on their servers. There is no difference in the delivered app from the previous one, as it is still finally signed by the original private key as before, except that Google also optimizes the app by splitting it into multiple APKs according to hardware, demographic and other factors, resulting in a much smaller app!

This blog will take you through the steps to enable the program for existing and new apps. A bit of a warning though: for security reasons, opting in to the program is permanent, and once you do it, it is not possible to back out, so think it through before committing.

For existing apps:

First you need to go to the particular app's detail section and then into Release Management > App Releases. There you will see the Get Started button for App Signing. The account owner must first agree to its terms and conditions, and once that is done, a page will be presented with information about the app signing infrastructure at the top.

So, as per the instructions, download the PEPK jar file to encrypt your private key. For this process, you need to have your existing private key and its alias and password. It is fine if you don't know the key password, but the store password is needed to generate the encrypted file. Then execute this command in the terminal, as written in Step 2 of your Play console:

```
java -jar pepk.jar --keystore={{keystore_path}} --alias={{alias}} --output={{encrypted_file_output_path}} --encryptionkey=eb10fe8f7c7c9df715022017b00c6471f8ba8170b13049a11e6c09ffe3056a104a3bbe4ac5a955f4ba4fe93fc8cef27558a3eb9d2a529a2092761fb833b656cd48b9de6a
```

You will have to…


Introducing Stream Servlet in loklak Server

A major part of my GSoC proposal was adding a stream API to loklak server. In a previous blog post, I discussed the addition of Mosquitto as a message broker for MQTT streaming. After testing this service for a few days and making some minor improvements, I was in a position to expose the stream to outside users through a simple API. In this blog post, I will discuss the addition of the /api/stream.json endpoint to loklak server.

HTTP Server-Sent Events

Server-sent events (SSE) is a technology where a browser receives automatic updates from a server via an HTTP connection. The Server-Sent Events EventSource API is standardized as part of HTML5 by the W3C. - Wikipedia

This API is supported by all major browsers except Microsoft Edge. For loklak, the plan was to use this event system to send messages, as they arrive, to the connected users. Apart from browsers, the EventSource API can also be used with many other technologies.

Jetty EventSource Plugin

For Java, we can use Jetty's EventSource plugin to send events to clients. It is similar to other Jetty servlets when it comes to processing the arguments, handling requests, etc., but it provides a simple interface to send events to connected users as they occur.

Adding Dependency

To use this plugin, we can add the following line to the Gradle dependencies:

```
compile group: 'org.eclipse.jetty', name: 'jetty-eventsource-servlet', version: '1.0.0'
```

[SOURCE]

The Event Source

An EventSource is the object which EventSourceServlet requires in order to send events. All the logic for emitting events needs to be defined in the related class. To link a servlet with an EventSource, we need to override the newEventSource method:

```java
public class StreamServlet extends EventSourceServlet {
    @Override
    protected EventSource newEventSource(HttpServletRequest request) {
        String channel = request.getParameter("channel");
        if (channel == null) {
            return null;
        }
        if (channel.isEmpty()) {
            return null;
        }
        return new MqttEventSource(channel);
    }
}
```

[SOURCE]

If no channel is provided, the EventSource object will be null and the request will be rejected. Here, the MqttEventSource is used to handle the stream of Tweets as they arrive from the Mosquitto message broker.

Cross Site Requests

Since the requests to this endpoint can't be of JSONP type, it is necessary to allow cross site requests on this endpoint. This can be done by overriding the doGet method of the servlet:

```java
@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    response.setHeader("Access-Control-Allow-Origin", "*");
    super.doGet(request, response);
}
```

[SOURCE]

Adding MQTT Subscriber

When a request for events arrives, the constructor of MqttEventSource is called. At this stage, we need to connect to the stream from Mosquitto for the channel. To achieve this, we can set the class as an MqttCallback using appropriate client configurations:

```java
public class MqttEventSource implements MqttCallback {
    ...
    MqttEventSource(String channel) {
        this.channel = channel;
    }
    ...
    this.mqttClient = new MqttClient(address, "loklak_server_subscriber");
    this.mqttClient.connect();
    this.mqttClient.setCallback(this);
    this.mqttClient.subscribe(this.channel);
    ...
}
```

[SOURCE]

By setting the callback to this, we can override the messageArrived method to handle the arrival of a new message on the channel. Just to…
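The excerpt ends before showing how messages are forwarded to the browser. As a rough sketch of that bridge — assuming the Jetty EventSource.Emitter handed over in onOpen is saved and reused from the MQTT callback; this is an illustration, not loklak's exact code:

```java
import java.io.IOException;

import org.eclipse.jetty.servlets.EventSource;
import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class MqttEventSource implements EventSource, MqttCallback {

    private Emitter emitter;
    private final String channel;

    MqttEventSource(String channel) {
        this.channel = channel;
    }

    @Override
    public void onOpen(Emitter emitter) throws IOException {
        // Keep a handle to the emitter so arriving MQTT messages
        // can be pushed to the connected SSE client.
        this.emitter = emitter;
    }

    @Override
    public void onClose() {
        this.emitter = null;
    }

    @Override
    public void messageArrived(String topic, MqttMessage message) throws Exception {
        // Forward the MQTT payload to the browser as an SSE data event.
        if (this.emitter != null) {
            this.emitter.data(message.toString());
        }
    }

    @Override
    public void connectionLost(Throwable cause) {
        // The broker connection dropped; the SSE stream could be closed here.
    }

    @Override
    public void deliveryComplete(IMqttDeliveryToken token) {
        // Not used: this source only subscribes, it does not publish.
    }
}
```

The point of the design is that each connected client gets its own EventSource, so a message arriving on the subscribed channel fans out to every open SSE connection for that channel.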


Adding download feature to LoklakWordCloud app on Loklak apps site

One of the most important and useful features that has recently been added to the LoklakWordCloud app is enabling the user to download the generated word cloud as a png/jpeg image. This feature allows the user to actually use this app as a tool to generate a word cloud from twitter data and save it on disk for future use. All the user needs to do is generate the word cloud, choose an image type (png or jpeg), and click on "export as image"; a preview of the image to be downloaded will be displayed. Just hit enter and the word cloud will be saved on your disk. Thus users will not have to resort to any alternative process like taking a screenshot of the generated word cloud. Presently the complete app is hosted on the Loklak apps site.

How does it work?

What we are doing is exporting a part of the page (a div) as an image and saving it. Apparently it might seem that we are taking a screenshot of a particular portion of a page and generating a download link, but actually it is not like that. The word cloud generated by this app via Jqcloud is actually a collection of HTML nodes. Each node contains a word (part of the cloud) as its text content, with some CSS styles specifying the size and color of that word. As the user clicks on the "export to image" option, the app traverses the div containing the cloud. It collects information about all the HTML nodes present under that div and creates a canvas representation of the entire div. So rather than taking a screenshot of the div, the app recreates the entire div and presents it to us. This entire process is accomplished by a lightweight JS library called html2canvas.

Let us have a look at the code that implements the download feature. At first we need to create the UI for the export and download option. The user should be able to choose between png and jpeg before exporting to image. For this we have provided a dropdown containing the two options.

```html
<div class="dropdown type" ng-if="download">
    <div class="dropdown-toggle select-type" data-toggle="dropdown">
        {{imageType}} <span class="caret"></span>
    </div>
    <ul class="dropdown-menu">
        <li ng-click="changeType('png', 'png')"><a href="">png</a></li>
        <li ng-click="changeType('jpeg', 'jpg')"><a href="">jpeg</a></li>
    </ul>
</div>
<a class="export" ng-click="export()" ng-if="download">Export as image</a>
```

In the above code snippet, we first create a dropdown menu with two list items, png and jpeg. With each list item we attach an ng-click event which calls the changeType function and passes two parameters, the image type and the extension. The changeType function simply updates the current image type and extension with the selected ones.

```javascript
$scope.changeType = function(type, ext) {
    $scope.imageType = type;
    $scope.imageExt = ext;
}
```

The 'Export as image' link, on clicking, calls the export function. The export function uses the html2canvas library's interface to generate the canvas representation of the word cloud and also generates the download link and attaches it to the modal's save button (described below). After everything is…
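The excerpt cuts off before the export function itself, so here is a minimal sketch of the idea using html2canvas's promise-based API. The "#wordcloud" element id is hypothetical, and unlike the app — which attaches the link to a modal's save button — this sketch triggers the download directly:

```javascript
// A sketch of the export step, assuming html2canvas's promise API
// and a hypothetical "#wordcloud" container div.
$scope.export = function() {
    var cloud = document.getElementById('wordcloud');
    html2canvas(cloud).then(function(canvas) {
        // Serialize the rendered canvas into a data URL of the
        // type the user picked ('image/png' or 'image/jpeg').
        var dataUrl = canvas.toDataURL('image/' + $scope.imageType);
        // Attach the data URL to a link so the browser saves it
        // with the matching file extension.
        var link = document.createElement('a');
        link.href = dataUrl;
        link.download = 'wordcloud.' + $scope.imageExt;
        link.click();
    });
};
```

This is exactly the "recreate, don't screenshot" flow described above: html2canvas walks the DOM nodes of the cloud and paints them onto a canvas, which can then be serialized like any other image.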


Backend Scraping in Loklak Server

Loklak Server is a peer-to-peer distributed scraping system. It scrapes data from websites and also maintains other sources, like peers, storage and a backend server, to fetch data from. Maintaining different sources has its benefits: fewer costly requests to the websites, and no scraping or cleaning of data. A Loklak Server can maintain a secondary Loklak Server (or backend server) tuned for storing a large amount of data. This enables the primary Loklak Server to fetch data in return for pushing all scraped data to the backend.

Lately there was a bug in backend search: a new feature for filtering tweets had been added to scraping and indexing, but not to backend search. To fix this issue, I backtracked through the backend search codebase and fixed it. Let us discuss how backend search works:

1) When a query is made from the search endpoint with:

a) source=all

When source is set to all, TwitterScraper and messages from the local search server are preferred at first. If the messages scraped are not enough, or no output has been returned for a specific amount of time, then backend search is initiated.

b) source=backend

SearchServlet specifically scrapes directly from the backend server.

2) Fetching data from the backend server

The input parameters fetched from the client are fed into the DAO.searchBackend method, and the list of backend servers is fetched from the config file. Using these input parameters and backend servers, the required data is scraped and output to the client. In the DAO.searchOnOtherPeers method, the request is sent to multiple servers, which are arranged in order of better response rates. This method invokes SearchServlet.search to send the request to the mentioned servers.

```java
List<String> remote = getBackendPeers();

if (remote.size() > 0) {
    // condition deactivated because we need always at least one peer
    Timeline tt = searchOnOtherPeers(remote, q, filterList, order, count, timezoneOffset, where, SearchServlet.backend_hash, timeout);
    if (tt != null) tt.writeToIndex();
    return tt;
}
```

3) Creation of the request URL and sending requests

The request URL is created according to the input parameters passed to the SearchServlet.search method, and the request is sent to the respective servers to fetch the required messages.

```java
// URL creation
urlstring = protocolhostportstub + "/api/search.json?q="
        + URLEncoder.encode(query.replace(' ', '+'), "UTF-8")
        + "&timezoneOffset=" + timezoneOffset
        + "&maximumRecords=" + count
        + "&source=" + (source == null ? "all" : source)
        + "&minified=true&shortlink=false&timeout=" + timeout;

if (!"".equals(filterString = String.join(", ", filterList))) {
    urlstring = urlstring + "&filter=" + filterString;
}

// Download data
byte[] jsonb = ClientConnection.downloadPeer(urlstring);
if (jsonb == null || jsonb.length == 0) throw new IOException("empty content from " + protocolhostportstub);
String jsons = UTF8.String(jsonb);
JSONObject json = new JSONObject(jsons);
if (json == null || json.length() == 0) return tl;

// Final data fetched to be returned
JSONArray statuses = json.getJSONArray("statuses");
```

References

Social peer-to-peer processes: https://en.wikipedia.org/wiki/Social_peer-to-peer_processes
Parallel Random-Access Machine: http://pages.cs.wisc.edu/~tvrdik/2/html/Section2.html
Distributed Algorithm (Cole–Vishkin algorithm): http://homepage.divms.uiowa.edu/~ghosh/color.pdf
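As a concrete illustration of the URL-creation code in section 3 above, a query like q=fossasia with a count of 100 and a single "video" filter would yield a request of roughly this shape (the backend host here is hypothetical):

```
https://backend.example.org/api/search.json?q=fossasia&timezoneOffset=-330&maximumRecords=100&source=all&minified=true&shortlink=false&timeout=10000&filter=video
```

The minified=true and shortlink=false flags keep the peer-to-peer payload small, since the backend response is re-indexed locally rather than shown to a user directly.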


Configuring Youtube Scraper with Search Endpoint in Loklak Server

Youtube Scraper is one of the interesting web scrapers of Loklak Server, with a unique implementation of its data scraping and data key creation (using RDF). It couldn't be accessed earlier, as it didn't have any URL endpoint. I configured it to be usable both through a separate endpoint (/api/youtubescraper) and through the search endpoint (/api/search.json).

Usage:

YoutubeScraper endpoint: /api/youtubescraper
Example: http://api.loklak.org/api/youtubescraper?query=https://www.youtube.com/watch?v=xZ-m55K3FhQ&scraper=youtube

SearchServlet endpoint: /api/search.json
Example: http://api.loklak.org/api/search.json?query=https://www.youtube.com/watch?v=xZ-m55K3FhQ&scraper=youtube

The configurations added in Loklak Server are:

1) Endpoint

We can access YoutubeScraper using the /api/youtubescraper endpoint. As with the other scrapers, I have used the BaseScraper class as the superclass for this functionality.

2) PrepareSearchUrl

The prepareSearchUrl method creates the YouTube search URL that is used to scrape the YouTube webpage. YoutubeScraper takes a URL as input, but a YouTube link could also be a shortened link. That is why the video id is stored as the query. This approach optimizes the scraper and adds the capability to attach more scrapers to it. Currently YoutubeScraper scrapes the video webpages of YouTube, but scrapers for the search webpage and channel webpages can also be added.

```java
URIBuilder url = null;
String midUrl = "search/";

try {
    switch (type) {
        case "search":
            midUrl = "search/";
            url = new URIBuilder(this.baseUrl + midUrl);
            url.addParameter("search_query", this.query);
            break;
        case "video":
            midUrl = "watch/";
            url = new URIBuilder(this.baseUrl + midUrl);
            url.addParameter("v", this.query);
            break;
        case "user":
            midUrl = "channel/";
            url = new URIBuilder(this.baseUrl + midUrl + this.query);
            break;
        default:
            url = new URIBuilder("");
            break;
    }
} catch (URISyntaxException e) {
    DAO.log("Invalid Url: baseUrl = " + this.baseUrl + ", mid-URL = " + midUrl + ", query = " + this.query + ", type = " + type);
    return "";
}
```

3) Get-Data-From-Connection

The getDataFromConnection method is used to fetch a BufferedReader object and input it to the scrape method. In YoutubeScraper, this method has been overridden to prevent using the default method implementation, i.e. using type=all.

```java
@Override
public Post getDataFromConnection() throws IOException {
    String url = this.prepareSearchUrl(this.type);
    return getDataFromConnection(url, this.type);
}
```

4) Setting scraper parameter inputs as GET parameters

The scraper reads type and query from the map of GET parameters. For a URL input, the video hash code is separated from the URL and then used as the query.

```java
this.query = this.getExtraValue("query");
this.query = this.query.substring(this.query.length() - 11);
```

5) Scrape Method

The scrape method runs the different scraper methods (in YoutubeScraper, there is only one), iterates over them using PostTimeline, and wraps the results in a Post object for output. This simple function can improve the flexibility of the scraper to scrape different pages concurrently.

```java
Post out = new Post(true);
Timeline2 postList = new Timeline2(this.order);
postList.addPost(this.parseVideo(br, type, url));
out.put("videos", postList.toArray());
```

References

What is an RDF triple, explained on Stackoverflow: https://stackoverflow.com/questions/273218/whats-a-rdf-triple
Tutorial on Scraping with Regular Expressions: http://stanford.edu/~mgorkove/cgi-bin/rpython_tutorials/Scraping_PDFsText_Files_in_Python_Using_Regular_Expressions.php
Youtube Video-Id Format: https://webapps.stackexchange.com/questions/54443/format-for-id-of-youtube-video
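To see why the substring trick in step 4 works for both long and shortened links, here is a small self-contained illustration. The helper name is hypothetical; like the scraper's own code, it assumes the 11-character video id sits at the very end of the URL (trailing parameters such as &t=30s would break this assumption):

```java
public class VideoIdExtractor {

    // YouTube video ids are 11 characters long, so taking the last 11
    // characters of the URL works for both link forms.
    static String extractVideoId(String url) {
        return url.substring(url.length() - 11);
    }

    public static void main(String[] args) {
        // Both forms yield the same id: xZ-m55K3FhQ
        System.out.println(extractVideoId("https://www.youtube.com/watch?v=xZ-m55K3FhQ"));
        System.out.println(extractVideoId("https://youtu.be/xZ-m55K3FhQ"));
    }
}
```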


Live Feeds in loklak Media wall using ‘source=twitter’

Loklak Server provides pagination to deliver tweets from the loklak search.json API in batches, so as to improve response time from the server. We will be taking advantage of this pagination using the parameter `source=twitter` of the search.json API on the loklak media wall. Basically, using the parameter 'source=twitter' in the API triggers real-time scraping and provides live feeds. To improve response time, it returns as many feeds as specified in the count (default is 100). In this blog, I explain how I implemented real-time pagination using 'source=twitter' in the loklak media wall to get live feeds from twitter.

Working

First API Call on Initialization

The first API call needs to have a high count (e.g. maximumRecords = 20) so as to get a higher number of feeds and provide a sufficient amount of feeds to fill up the media wall. 'source=twitter' must be specified so that real-time feeds are scraped and provided from twitter.

http://api.loklak.org/api/search.json?q=fossasia&callback=__ng_jsonp__.__req0.finished&minified=true&source=twitter&maximumRecords=20&timezoneOffset=-330&startRecord=1

If feeds are received from the server, then the next API request must be sent after 10 seconds, so that the server gets sufficient time to scrape the data and store it in the database. This can be done by an effect which dispatches WallNextPageAction(''), keeping debounceTime equal to 10000 so that the next request is sent 10 seconds after WallSearchCompleteSuccessAction().

```typescript
@Effect()
nextWallSearchAction$ = this.actions$
    .ofType(apiAction.ActionTypes.WALL_SEARCH_COMPLETE_SUCCESS)
    .debounceTime(10000)
    .withLatestFrom(this.store$)
    .map(([action, state]) => {
        return new wallPaginationAction.WallNextPageAction('');
    });
```

Consecutive Calls

To implement pagination, the next consecutive API call must be made to add new live feeds to the media wall. For the new feeds, the count must be kept low so that no heavy pagination takes place and feeds are added one by one, putting more focus on new tweets. For this purpose, the count must be kept at one.

```typescript
this.searchServiceConfig.count = queryObject.count;
this.searchServiceConfig.maximumRecords = queryObject.count;

return this.apiSearchService.fetchQuery(queryObject.query.queryString, this.searchServiceConfig)
    .takeUntil(nextSearch$)
    .map(response => {
        return new wallPaginationAction.WallPaginationCompleteSuccessAction(response);
    })
    .catch(() => of(new wallPaginationAction.WallPaginationCompleteFailAction('')));
```

Here, count and maximumRecords are updated from queryObject.count, which varies between 1 and 5 (default being 1) and can be changed by the user from the customization menu. The next API request is as follows:

http://api.loklak.org/api/search.json?q=fossasia&callback=__ng_jsonp__.__req2.finished&minified=true&source=twitter&maximumRecords=1&timezoneOffset=-330&startRecord=1

Now, as done above, if a response is received, the next request is sent 10 seconds after WallPaginationCompleteSuccess() from an effect, by keeping debounceTime equal to 10000. In a similar way, new consecutive calls can be made by keeping 'source=twitter' and keeping the count low, to get a proper focus on each new feed.

Reference

Loklak API Documentation: http://loklak.org/api.html
Introduction to Ngrx Effects: https://github.com/ngrx/effects/blob/master/docs/intro.md
Documentation on JsonP requests: https://stackoverflow.com/questions/36289495/how-to-make-a-simple-jsonp-asynchronous-request-in-angular-2
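Mirroring the first effect shown above, the second debounced trigger described in the last paragraph could look like the following sketch, written in the same effects class. The WALL_PAGINATION_COMPLETE_SUCCESS action type name is an assumption based on the action described in the post:

```typescript
// Sketch: after a pagination response succeeds, wait 10 seconds so the
// server can scrape and store fresh tweets, then request the next page.
@Effect()
nextWallPageAction$ = this.actions$
    .ofType(apiAction.ActionTypes.WALL_PAGINATION_COMPLETE_SUCCESS)
    .debounceTime(10000)
    .map(() => new wallPaginationAction.WallNextPageAction(''));
```

Together, the two effects form a loop — success, 10-second pause, next page — which is what keeps the wall continuously fed with live tweets.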


Preparing for Automatic Publishing of Android Apps in Play Store

I spent this week searching through libraries and services which provide a way to publish built APKs directly through an API, so that the repositories for Android apps can trigger publishing automatically after each push on the master branch. The projects to be auto-deployed are:

- Open Event Orga App
- Open Event Android
- PSLab Android
- Loklak Wok Android
- Phimp.me Android
- SUSI Android

I had my eyes on fastlane for a couple of months, and it came out as the best solution for the task. The tool not only allows publishing of APK files, but also of Play Store listings, screenshots, and changelogs. And that is only a subset of its capabilities, bundled in a subservice, supply.

There is a process to go through before getting started with this service, which I will walk through step by step in this blog. The process is also outlined in the README of the supply project.

Enabling API Access

The first step in the process is to enable API access in your Play Store Developer account if you haven't done so already. For that, you have to open the Play Dev Console and go to Settings > Developer Account > API access. If this is the first time you are opening it, you'll be presented with a confirmation dialog detailing the ramifications of the action and asking whether you agree. Read the terms carefully and click accept if you agree with them. Once you do, you'll be presented with the API access settings panel.

Creating Service Account

As there is no registered service account yet, we need to create one. So, click on the CREATE SERVICE ACCOUNT button, and a dialog will pop up with instructions on how to do so. Open the highlighted link in a new tab and the Google API Console will open up. Click on Create Service Account and fill in these details:

Account Name: Any name you want
Role: Project > Service Account Actor

Then select "Furnish a new private key" and select JSON. Click CREATE. A new JSON key will be created and downloaded to your device. Keep this secret, as anyone with access to it can at least change the Play Store listings of your apps, if not upload new apps in place of existing ones (those are protected by signing keys).

Granting Access

Now return to the Play Console tab (where we were at the start of Creating Service Account) and click done, as you have created the Service Account now. You should see the created service account listed. Now click on grant access, choose Release Manager from the Role dropdown, and select the desired PERMISSIONS. Of course, you don't want the fastlane API to access financial data or manage orders; other than that, it is up to you what to allow or disallow. The same goes for the expiry date: we have left it to never expire. Click on ADD USER and…
