Maintaining Extension State in SUSI.AI Chrome Bot Using Chrome Storage API

SUSI Chrome Bot is a browser action Chrome extension used to communicate with SUSI.AI. A browser action extension in Chrome behaves like any other web app you visit: it keeps your data and preferences, such as theme and speech synthesis settings, only while you are interacting with it. Once you close it, it forgets everything unless you save it in a database or use cookies for the session. We want to save the chats and other preferences, such as theme settings, that the user makes while interacting with SUSI.AI through SUSI Chrome Bot. In this blog, we'll explore Chrome's storage API for storing data.

What options do we have for storing data offline?

IndexedDB: IndexedDB is a low-level API for client-side storage of data. It allows us to store large amounts of data and works much like an RDBMS, but IndexedDB is a JavaScript-based object-oriented database.

localStorage API: localStorage allows us to store data in key/value pairs, which is much more effective than storing data in cookies. localStorage data persists even if the user closes and reopens the browser. Chrome also provides us with the chrome.storage API, which offers the same storage capabilities as the localStorage API with some advantages.

For susi_chromebot we will use chrome.storage because of the following advantages it has over the localStorage API:

  1. User data can be automatically synced with Chrome sync if the user is logged in.
  2. The extension’s content scripts can directly access user data without the need for a background page.
  3. A user’s extension settings can be persisted even when using incognito mode.
  4. It’s asynchronous so bulk read and write operations are faster than the serial and blocking localStorage API.
  5. User data can be stored as objects whereas the localStorage API stores data in strings.
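To make the object-versus-string difference concrete, here is a minimal comparison sketch (the key and value names are purely illustrative):

// chrome.storage accepts plain objects and an asynchronous callback
chrome.storage.sync.set({ preferences: { theme: "dark", speechOutput: true } }, function () {
  console.log("preferences saved");
});

// localStorage is synchronous and only stores strings, so objects must be serialized
localStorage.setItem("preferences", JSON.stringify({ theme: "dark", speechOutput: true }));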

Integrating chrome.storage into susi_chromebot for storing chat data

To use chrome.storage we first need to declare the necessary permission in the extension's manifest file. Add "storage" in the permissions key inside the manifest file.

"permissions": [


We want to store the chat the user has made with SUSI. We will use a JavaScript object to store the chat data.

var storageObj = {
  senderClass: "",
  content: ""
};
The storageObj object has two keys, namely senderClass and content. The senderClass key represents the sender of the message (user or susi) whereas the content key holds the actual content of the message.

We will use the chrome.storage.sync.set and chrome.storage.sync.get methods to store and retrieve data.

var susimessage = newDiv.innerHTML;
storageObj.content = susimessage;
storageObj.senderClass = "susinewmessage";
chrome.storage.sync.get("message", (items) => {
  storageArr = items.message;
  storageArr.push(storageObj);
  chrome.storage.sync.set({"message": storageArr}, () => {
    // message data saved
  });
});

In the above code snippet, susimessage contains the actual message content sent by the SUSI server. We then set the corresponding properties of the storageObj object that we declared earlier. Now we could use chrome.storage.sync.set to save the storageObj object, but that would overwrite the current data that we have inside Chrome's StorageArea. To prevent the old message data from getting overwritten, we first get all the message content in our storage using chrome.storage.sync.get. Notice how we are passing the "message" string as the first parameter to the function. This is done because we only want the message content that was saved in the StorageArea; if we pass null instead, it returns everything inside the StorageArea. Once we have our messages (an array of objects stored as storageObj), we store that into a new array storageArr. We then push our new storageObj, which contains the message and the sender, into the array. Finally, we use chrome.storage.sync.set to save the message content in Chrome's StorageArea, which can later be retrieved using the "message" key.


We use the same procedure to save messages sent by the user.
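For completeness, here is a rough sketch of how the saved chat could be restored when the popup opens, assuming the same "message" key; appendMessage is a hypothetical rendering helper, not part of the snippets above:

chrome.storage.sync.get("message", (items) => {
  var savedMessages = items.message || [];
  savedMessages.forEach((msg) => {
    // msg.senderClass and msg.content are the fields stored in storageObj
    appendMessage(msg.senderClass, msg.content);   // hypothetical helper that renders a chat bubble
  });
});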

Note: the chrome.storage quota is not very large, so we need to be careful about what we store or we may run out of storage space. Also, we should not store confidential data in storage since the storage area is not encrypted.




Adding Map Type Response to SUSI.AI Chromebot

SUSI.AI Chromebot supports almost all kinds of replies that the SUSI.AI server can generate, but it was still missing the map type response.

This blog explains how the map type response was added to the chromebot.

Brief Introduction

The original issue was planned by Manish Devgan and Mohit Sharma as an advanced task for Google Code-In 2017. It can be found here: #157

For a long time the issue remained untouched, and after GCI got over I assigned the issue to myself, since map type was a major response from the SUSI.AI server and therefore a priority.

How was Map Type response added?

There were a lot of things to keep in mind before starting work on this issue.

  • Changing code scheme during GCI and other PRs
  • API Response from the SUSI.AI Server
  • Understanding the new codebase that got altered during GCI-17
  • Doing it quick

I will go through all the steps in detail.

Changing Code Scheme

The code was altered numerous times with the addition of a number of pull requests during GCI-17, and there were no docstrings for any functions or methods, so I had to figure them out in order to start working on the map type response.

API Response from the SUSI.AI Server

To understand the JSON that the server sent, I went to the SUSI.AI API and did a simple search for "Where is Berlin?". The response generated is given below.

( Since the JSON is very big I am only posting the relevant data for this issue )


    "actions": [
        "type": "answer",
        "language": "en",
        "expression": "Berlin (, German: [bɛɐ̯ˈliːn] ( listen)) is the capital and the largest city of Germany as well as one of its 16 constituent states."
        "type": "anchor",
        "link": "",
        "text": "Here is a map",
        "language": "en"
        "type": "map",
        "latitude": "52.52436820069531",
        "longitude": "13.41053001275776",
        "zoom": "13",
        "language": "en"


Here we see that "actions" is an array of JSON objects and the third entry has "type" set to "map". This is the relevant information that we require for generating the map type response.

The important variables in this context are: “latitude” and “longitude”.
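A rough sketch of how the chromebot can pick the map action out of the reply while parsing it (assuming the usual answers[0].actions path of the SUSI API response; the surrounding variable names are illustrative):

// iterate over the actions returned by the SUSI server and handle the map type
var actions = data.answers[0].actions;
actions.forEach(function (action) {
  if (action.type === "map") {
    response = composeReplyMap(response, action);   // composeReplyMap is described later in this post
  }
});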

Understanding the Codebase

Now I had to figure out the new pattern of adding response types to the SUSI.AI Chromebot.

After having a talk with @ms10398 I figured out the route map.

The above image shows the correct flow of Javascript Code that generated the response. After this, I was good to go and start my work.

Adding the Map-Type Response

To start with I chose “LEAFLET.JS” as the Javascript Library that will be used to create maps.

  • So I added the LEAFLET.JS to the JS folder.
  • Now changes were made to the “index.html” file


<link rel="stylesheet" href="[email protected]/dist/leaflet.css" />



Appropriate CSS was added, along with a link to leaflet.js.

  • Adding CSS to the "mapClass"

    .mapClass {
        height : 200px;
        width : 200px;
    }



  • Generating Maps with dynamic IDs

This part was where I had to think a little: to add a map to any div, the div needs a proper and unique ID, so a way to generate unique IDs for divs without using any external source had to be found.

I came up with the idea of using a timestamp, as it will always be unique.

var timeStamp = new Date().getTime();


Then I created the "composeReplyMap()" function.

function composeReplyMap(response, action){

    var newDiv = messages.childNodes[messages.childElementCount];

    var mapDiv = document.createElement("div");

    var mapDivId = timeStamp;   // unique id for the map container, generated from the timestamp above

    mapDiv.setAttribute("id", mapDivId);

    mapDiv.setAttribute("class", "mapClass");

    newDiv.appendChild(mapDiv);   // attach the map container to the message div

    var newMap = L.map(mapDivId).setView([Number(action.latitude), Number(action.longitude)], 13);

    L.tileLayer(
        // this part contains the tile layer URL and the data for the API call
    ).addTo(newMap);

    response.isMap = true;

    response.newMap = mapDiv;

    return response;
}




The complete code can be found: here

At last, after adding these snippets of code, we were able to generate the map type response for SUSI.AI Chromebot.


A gif showing the Map-Type response in action.




Toggling Voice On/Off in SUSI Chromebot

SUSI Chromebot has a lot of features that make it one of the best projects of FOSSASIA.

Recently voice/speech output was added to SUSI Chromebot, but there was no option to control whether speech output is wanted or not.

The latest addition to SUSI Chromebot is Toggling the Voice of SUSI On or Off.

How was it achieved?

Toggling voice for SUSI required adding a button and a snippet of JavaScript code to the main JS file. The code takes care of whether the voice is to be toggled on or off.

I started off by adding a button to the main HTML file.

<a href="javascript: void(0)" id="speak" style="color: white"><i class="material-icons" id="speak-icon">volume_up</i></a>

The above snippet of HTML code adds a voice button to the top bar of chromebot.

Then came the major part: the JavaScript code that adds the functionality to the button.

var shouldSpeak = true;

I started off by creating a variable called "shouldSpeak" which determines whether or not SUSI should use Chrome's speech synthesis API to speak.

Then I changed the “speakOut()” function and added another parameter to it.

function speakOut(msg, speak = false){
    if (speak) {
        var voiceMsg = new SpeechSynthesisUtterance(msg);
        window.speechSynthesis.speak(voiceMsg);
    }
}




The above code makes sure that SUSI speaks only when the "speak" parameter is set to true.

Then an event listener was added to the button to link up the functionality, roughly as shown below.
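A minimal sketch of that wiring, based on the "speak" id used in the HTML above:

var speakButton = document.getElementById("speak");
speakButton.addEventListener("click", changeSpeak);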


It adds a click event to "speak" and associates it with the function "changeSpeak".

Now the function “changeSpeak” is created as follows. It toggles the on/off mechanism of voice in SUSI Chromebot.

function changeSpeak(){
    shouldSpeak = !shouldSpeak;
    var SpeakIcon = document.getElementById('speak-icon');
    if (!shouldSpeak) {
        SpeakIcon.innerText = "volume_off";
    } else {
        SpeakIcon.innerText = "volume_up";
    }
    console.log('Should be speaking? ' + shouldSpeak);
}


Every time the user clicks on the icon to toggle voice on or off, the icon must also change, and this functionality is taken care of by the above piece of code.
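With the flag in place, the existing call sites only need to pass it through; a one-line usage sketch (message here stands for the reply text to be spoken):

speakOut(message, shouldSpeak);   // speech is produced only while the toggle is on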



Setting up SUSI Desktop Locally for Development and Using Webview Tag and Adding Event Listeners

SUSI Desktop is a cross platform desktop application based on Electron which presently uses the SUSI.AI web app as a submodule and allows users to interact with SUSI right from their desktop.

Any Electron app essentially comprises the following components:

    • Main Process (Managing windows and other interactions with the operating system)
    • Renderer Process (Manage the view inside the BrowserWindow)

Steps to setup development environment

      • Clone the repo locally.
$ git clone
$ cd susi_desktop
      • Install the dependencies listed in package.json file.
$ npm install
      • Start the app using the start script.
$ npm start

Structure of the project

The project was restructured to ensure that the working environments of the Main and Renderer processes are separate, which makes the codebase easier to read and debug. This is how the current project is structured.

The root directory of the project contains another directory, 'app', which contains our Electron application. Then we have a package.json which holds the information about the project and the modules required for building it, and then there are other GitHub helper files.

Inside the app directory-

  • Main – Files for managing the main process of the app
  • Renderer – Files for managing the renderer process of the app
  • Resources – Icons for the app and the tray/media files

Webview Tag

    The webview tag displays external web content in an isolated frame and process. It is used to load the SUSI.AI web app in a BrowserWindow as

    <webview src=""></webview>

Adding event listeners to the app

Various Electron APIs were used to give a native feel to the application.

  • Send focus to the window WebContents on focussing the app window.
    win.on('focus', () => {
        win.webContents.focus();
    });
  • Display the window only once the DOM has completely loaded.
    const page = mainWindow.webContents;
    page.on('dom-ready', () => {
        mainWindow.show();
    });
  • Display the window on the 'ready-to-show' event.
    win.once('ready-to-show', () => {
        win.show();
    });
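    Putting these pieces together, a minimal main-process sketch looks roughly like the following (the window options and the renderer path are illustrative, not the exact ones used in SUSI Desktop):

    const { app, BrowserWindow } = require('electron');

    app.on('ready', () => {
        const win = new BrowserWindow({ width: 800, height: 600, show: false });
        win.loadURL('file://' + __dirname + '/index.html');   // illustrative path to the renderer page
        win.once('ready-to-show', () => {
            win.show();                    // display the window only when it is ready
        });
        win.on('focus', () => {
            win.webContents.focus();       // send focus to the web contents when the window gains focus
        });
    });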


    1. A quick article to understand electron’s main and renderer process by Cameron Nokes at Medium link
    2. Official documentation about the webview tag at
    3. Read more about electron processes at
    4. SUSI Desktop repository at

    Link Preview Holder on SUSI.AI Android Chat

    SUSI Android contains several view holders which bind a view based on its type, and one of them is LinkPreviewHolder. As the name suggests, it is used for previewing links in the chat window. As soon as it receives a link as input it inflates a link preview layout. The problem was that whenever a user input a link, the app crashed. It crashed because it tried to inflate a component that does not exist in the view given to the ViewHolder, which resulted in a NullPointerException. The workaround for fixing this bug was to inflate the layout and its components based on the type of the sender (user query or SUSI response). Let's see how all the functionalities were implemented in the LinkPreviewHolder class.

    Components of LinkPreviewHolder

    public TextView text;
    public LinearLayout backgroundLayout;
    public ImageView previewImageView;
    public TextView titleTextView;
    public TextView descriptionTextView;
    public TextView timestampTextView;
    public LinearLayout previewLayout;
    @Nullable @BindView( . . . )
    public ImageView receivedTick;
    protected ImageView thumbsUp;
    protected ImageView thumbsDown;

    Currently it binds the view components with their associated ids using the @BindView(id) annotation.

    Instantiates the class with a constructor

    public LinkPreviewViewHolder(View itemView, ClickListener listener) {
       super(itemView, listener);
       realm = Realm.getDefaultInstance();
       ButterKnife.bind(this, itemView);
    }

    Here it binds the current class with the view passed in the constructor using ButterKnife and initializes the ClickListener.

    Now the components described above are set in the setView function:

    Spanned answerText;
    if (android.os.Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
        answerText = Html.fromHtml(model.getContent(), Html.FROM_HTML_MODE_COMPACT);
    } else {
        answerText = Html.fromHtml(model.getContent());
    }
    This sets the textView inside the view with a clickable link. A version check has also been put in place to test the Android version (above Nougat or not) and call the appropriate overload accordingly.

    This ViewHolder inflates different components based on who has requested the output. If the link comes from the user's query, some extra components get inflated which need not be inflated for the response, apart from the basic layout.

    if (viewType == USER_WITHLINK) {
       if (model.getIsDelivered())
           receivedTick.setImageResource( . . . );   // tick drawable for a delivered message
       else
           receivedTick.setImageResource( . . . );   // tick drawable for a pending message
    }

    In the above code the received-tick image resource is set according to whether the query sent by the user has been delivered or not. These components are only initialised when the user has sent a link.

    Now comes the configuration for the result obtained from the query. Every skill has a rating associated with it. To mark the ratings, there needs to be a counter for rating the skill positively or negatively. This code should only execute for the response and not for the query part. This was the reason for the crash: the logic tried to inflate components belonging to the response, but the view that was passed belonged to the query, which gave a NullPointerException. So the logic for the response needs to be separated from that of the query.

    if (viewType != USER_WITHLINK) {
       // inflate the rating components only for the SUSI.AI response
       thumbsUp.setOnClickListener(new View.OnClickListener() {
           public void onClick(View view) { . . . }
       });
       thumbsDown.setOnClickListener(new View.OnClickListener() {
           public void onClick(View view) { . . . }
       });
    }

    As you can see in the above code, it inflates the rating components (thumbsUp and thumbsDown) for the view of the SUSI.AI response and sets the click listeners for the rating buttons. Then, in the code below, it previews the link and commits the data to the database using Realm through the WebLink class.

    LinkPreviewCallback linkPreviewCallback = new LinkPreviewCallback() {
       public void onPre() { . . . }
       public void onPos(final SourceContent sourceContent, boolean b) { . . . }
    };

    This method calls the API and sets the rating of that skill on the server. On a successful result it changes the thumb icon, alters the rating, and commits those changes to the database using Realm.

    private void rateSusiSkill(final String polarity, String locationUrl, final Context context) {..}


    SUSI.AI Chrome Bot and Web Speech: Integrating Speech Synthesis and Recognition

    SUSI Chrome Bot is a Chrome extension which is used to communicate with SUSI.AI. The advantage of Chrome extensions is that they are very accessible: the user can perform certain tasks without having to move to another tab or site.

    In this blog post, we will be going through the process of integrating the web speech API to SUSI Chromebot.

    Web Speech API

    Web Speech API enables web apps to be able to use voice data. The Web Speech API has two components:

    Speech Recognition:  Speech recognition gives web apps the ability to recognize voice data from an audio source. Speech recognition provides the speech-to-text service.

    Speech Synthesis: Speech synthesis provides the text-to-speech services for the web apps.

    Integrating speech synthesis and speech recognition in SUSI Chromebot

    Chrome provides the webkitSpeechRecognition() interface which we will use for our speech recognition tasks.

    var recognition = new webkitSpeechRecognition();


    Now, we have a speech recognition instance recognition. Let us define necessary checks for error detection and resetting the recognizer.

    var recognizing;
    function reset() {
        recognizing = false;
    }
    recognition.onerror = function(e){
        reset();   // reset the recognizer on error
    };
    recognition.onend = function(){
        reset();   // reset once recognition ends
    };


    We now define the toggleStartStop() function that will check if recognition is already being performed in which case it will stop recognition and reset the recognizer, otherwise, it will start recognition.

    function toggleStartStop() {
        if (recognizing) {
          recognition.stop();
          reset();
        } else {
          recognition.start();
          recognizing = true;
        }
    }

    We can then attach an event listener to a mic button which calls the toggleStartStop() function to start or stop our speech recognition.

    mic.addEventListener("click", function () {


    Finally, when the speech recognizer has some results it calls the onresult event handler. We’ll use this event handler to catch the results returned.

    recognition.onresult = function (event) {
        for (var i = event.resultIndex; i < event.results.length; ++i) {
          if (event.results[i].isFinal) {
            textarea.value = event.results[i][0].transcript;
          }
        }
    };

    The above code snippet checks the results produced by the speech recognizer and, if a result is final, sets the textarea value to the recognized text, which is then submitted to the backend.

    One problem that we might face is the extension not being able to access the microphone. This can be resolved by asking for microphone access from an external tab/window/iframe. For SUSI Chromebot this is done using an external tab: pressing the settings icon opens a new tab which then asks for microphone access from the user. This needs to be done only once, so it does not cause much trouble.

    setting.addEventListener("click", function () {
    url: chrome.runtime.getURL("options.html")
    audio: true
    }, function(stream) {
    }, function () {
    console.log('no access');


    In contrast to speech recognition, speech synthesis is very easy to implement.

    function speakOutput(msg){
        var voiceMsg = new SpeechSynthesisUtterance(msg);
        window.speechSynthesis.speak(voiceMsg);
    }

    This function takes a message as input, declares a new SpeechSynthesisUtterance instance and then calls the speak method to convert the text message to voice.

    There are many properties and attributes that come with this speech recognition and synthesis interface. This blog post only introduces the very basics.
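    For reference, a few commonly used properties are sketched below (the values shown are illustrative defaults, not necessarily the ones used by SUSI Chromebot):

    // recognition tuning
    recognition.continuous = false;       // stop automatically after one phrase
    recognition.interimResults = false;   // report only final results
    recognition.lang = "en-US";           // recognition language

    // synthesis tuning
    var voiceMsg = new SpeechSynthesisUtterance("Hello from SUSI");
    voiceMsg.rate = 1;                    // speaking rate
    voiceMsg.pitch = 1;                   // voice pitch
    window.speechSynthesis.speak(voiceMsg);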



    Enhancing SUSI Desktop to Display a Loading Animation and Auto-Hide Menu Bar by Default

    SUSI Desktop is a cross platform desktop application based on Electron which presently uses the SUSI.AI web app as a submodule and allows users to interact with SUSI right from their desktop. The benefit of using the web app as a submodule is that the desktop app inherits all the features the web app offers and serves them in a nicely built native application.

    Display a loading animation during DOM load.

    Electron apps should give a native feel rather than feeling like they are just rendering some DOM, so it is better to display a loading animation while the web content is actually loading; the gif below depicts how I implemented that.
    Electron provides a nice, easy to use API for handling BrowserWindow and WebContents events. I read through the official docs and came up with a simple solution, shown in the snippet below.

    onload = function () {
    	const webview = document.querySelector('webview');
    	const loading = document.querySelector('#loading');
    	function onStopLoad() {
    		loading.classList.add('hide');      // hide the loading animation once the content has loaded
    	}
    	function onStartLoad() {
    		loading.classList.remove('hide');   // show the loading animation while the content loads
    	}
    	webview.addEventListener('did-stop-loading', onStopLoad);
    	webview.addEventListener('did-start-loading', onStartLoad);
    };

    Hiding menu bar as default

    Menu bars are useful, but they are annoying since they take up space in the main window, so I hid them by default; users can toggle their display by pressing the Alt key at any time. I used the autoHideMenuBar property of the BrowserWindow class while creating the window object to achieve this.

    const win = new BrowserWindow({
    	show: false,
    	autoHideMenuBar: true
    });

    1. More information about BrowserWindow class in the official documentation at
    2. Follow a quick tutorial to kickstart creating apps with electron at
    3. SUSI Desktop repository at

    Showing “Get started” button in SUSI Viber bot

    When we start a chat with SUSI.AI on Viber, i.e. the SUSI Viberbot, there should be an option on how to get started with the bot. The responses to it are options like "Visit repository" and "How to contribute", which direct the user to check how the SUSI.AI bot is made and prompt them to contribute to it. Along with that, a "Start chatting" option can be shown to offer some sample queries for the user to try.

    To accomplish the task at hand, we break it into these sub-tasks:

    1. To show the “Get started” button.
    2. To show the reply to “Get started” query.
    3. To respond to the queries, nested in the response of “Get started”

    Showing “Get started”:

    The Viber developers platform notifies us when a user starts a conversation with our bot. To be exact, a conversation_started event is sent to our webhook and can be handled accordingly. The Viberbot shows a welcome message to the user along with a "Get started" button to help them begin.

    To send just the welcome message:

    if (req.body.event === 'conversation_started') {
           // Welcome Message
           var options = {
               method: 'POST',
               url: '',
               headers: headerBody,
               body: {
                   // some required body properties here
                   text: 'Welcome to SUSI.AI!, ' + req.body.user.name + '.',   // the user's name comes with the conversation_started event
                   // code for showing the get started button here.
               },
               json: true
           };
           request(options, function(error, res, body) {
               // handle error
           });
    }
    The next step is to show the “Get started” button. To show that we use a keyboard tool, provided by Viber developers platform. So after the “text” key we have the “keyboard” key and a value for it:

    keyboard: {
                 "Type": "keyboard",
                 "DefaultHeight": true,
                 "Buttons": [{
                     "ActionType": "reply",
                     "ActionBody": "Get started"
                 }]
             }

    The action type as shown in the code, can be “reply” or “open-url”. The “reply” action type, triggers an automatic query sent back with “Get started” (i.e. the value of “ActionBody” key), when that button gets clicked.

    Hence, this code helps us tackle our first sub-task.

    Reply to “Get started”:

    We aim to make each SUSI.AI bot generic. The SUSI FBbot and SUSI Tweetbot show options like "Visit repository", "Start chatting" and "How to contribute?" for the "Get started" query. We render the same answer structure in the Viberbot.

    The "rich_media" type helps us send buttons in our reply message. As we need three buttons in our message, the button group rows are set to three in the body object:

    if(message === "Get started"){
                       var options = {
                           method: 'POST',
                           url: '',
                           headers: headerBody,
                           body: {
                               // some body object properties here
                               type: 'rich_media',
                               rich_media: {
                                   Type: "rich_media",
                                   ButtonsGroupColumns: 6,
                                   ButtonsGroupRows: 3,
                                   BgColor: "#FFFFFF",
                                   Buttons: buttons
                           json: true
                       request(options, function(error, res, body) {
                           if (error) throw new Error(error);

    As said before, two action types are available: "open-url" and "reply". The "Visit repository" button has an "open-url" action type, while "How to contribute?" and "Start chatting" have a "reply" action type.

    Example of “Visit repository” button:

    var buttons = [{
                    Columns: 6,
                    Rows: 1,
                    Text: "Visit repository",
                    "ActionType": "open-url",
                    "ActionBody": "",
                    // some text styling properties here
                }];

    To respond to the “reply” action type queries:

    When the "reply" action type button gets clicked, it triggers an automatic query sent back to the bot with the same value as the "ActionBody" key. So we just need to check whether the message string received is "Start chatting" or "How to contribute?", roughly as sketched below.
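    A rough sketch of that routing inside the webhook handler (the reply-sending code follows the same request pattern as the "Get started" block above):

    // route the automatic replies triggered by the keyboard buttons
    if (message === "Start chatting") {
        // reply with sample query buttons such as "What is FOSSASIA?"
    } else if (message === "How to contribute?") {
        // reply with contribution links such as the Gitter channel
    }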

    For the response to “Start chatting”, we plan to show sample queries for the user to try. This can be shown by using buttons with the action type as “reply”.

    Code snippet to show a button with the text as “What is FOSSASIA?”:

    var buttons = [{
                            Columns: 6,
                            Rows: 1,
                            Text: "What is FOSSASIA?",
                            "ActionType": "reply",
                            "ActionBody": "What is FOSSASIA?",
                            // text styling here
                        }];

    For the response to "How to contribute", we show some messages to help the user contribute to SUSI.AI. These messages also carry buttons so the necessary action can be applied.

    We respond with 2 messages to the user, both the messages have a button.

    For example, a button to visit the SUSI.AI Gitter channel:

    var buttons = [{
                        Columns: 6,
                        Rows: 1,
                        Text: "<font color=#323232><b>Chat on Gitter</b></font>",
                        ActionType: "open-url",
                        ActionBody: "",
                        // text styling here
                    }];

    This way we have successfully added the “Get started” option to our Viberbot and handled all the subsequent steps.


    1. Viber video managing chat extensions by Ingrid Lunden from Tech crunch.
    2. Develop a chat bot with node js by Slobodan Stojanović from smashing magazine.

    Creating Settings Screen in SUSI Android Using PreferenceActivity and Kotlin

    An Android application often includes settings that allow the user to modify features of the app. For example, the SUSI Android app allows users to specify whether they want to use the built-in mic for speech input or not. The different settings in the SUSI Android app and their purposes are given below.

    Setting – Purpose
    Enter As Send – It allows users to specify whether they want to use the enter key to send a message or to add a new line.
    Mic Input – It allows users to specify whether they want to use the built-in mic to give speech input or not.
    Speech Always – It allows users to specify whether they want voice output in case of speech input or not.
    Speech Output – It allows users to specify whether they want speech output irrespective of input type or not.
    Language – It allows users to set a different query language.
    Reset Password – It allows users to change their password.
    Select Server – It allows users to specify whether they want to use a custom server or not.

    Android provides a powerful framework, the Preference framework, that allows us to define preferences the way we want. In this blog post, I will show how the Settings UI is created using the Preference framework and Kotlin in SUSI Android.

    Advantages of using Preference are:

    • It has its own UI, so we don't have to develop a separate UI for it.
    • It stores values in SharedPreferences, so we don't need to manage them in SharedPreferences ourselves.

    First, we will add the dependency in build.gradle(project) file as shown below.

    compile 'com.takisoft.fix:preference-v7:'

    To create the custom style for our Settings Activity screen we can set the theme provided by the preference-v7 fix library as the base theme and apply various other modifications and colours over it. By default, it has the usual Day and Night themes with a NoActionBar extension.

    Layout Design

    I used PreferenceScreen as the main container for the Settings UI and filled it with the other components. The different components used are the following:

    • SwitchPreferenceCompat: This gives us the Switch Preference which we can use to toggle between two different modes in the setting.


    • PreferenceCategory: It is used for grouping the preferences. For example, Chat Settings, Mic Settings and Speech Settings are different groups in the settings.

    • ListPreference: This preference displays a list of values and helps in selecting one. For example, in the set-language option a ListPreference is used to show a list of query languages. The list of query languages is provided via the xml file array.xml (res/values). The attribute android:entries points to the array languagentries and android:entryValues holds the corresponding value defined for each of the languages.





    Implementation in SUSI Android

    All the logic related to Preferences and their action is written in ChatSettingsFragment class. ChatSettingsFragment extends PreferenceFragmentCompat class.

    class ChatSettingsFragment : PreferenceFragmentCompat()

    The fragment populates the preferences when created. The addPreferencesFromResource method is used to inflate the view from XML.



    Implement Internationalization in SUSI Android With Weblate

    When you build an Android app, you must consider the users for whom you are building it. It is quite possible that your users are from different regions. To support the most users, your app should show text in the locale language so that users can use it easily. Our app, SUSI Android, also targets users from different regions. Internationalization is a way to ensure that our app can be adapted to various languages without requiring any change to the source code. This also allows the project to collaborate with non-coders more easily and to plug in translation tools like Weblate.

    Benefits of using Internationalization are:

    • It reduces the time for localization, i.e. the app can be localized automatically once translations are available.
    • It helps us keep and maintain only a single source code for different regions.

    To achieve Internationalization in Android app you must follow below steps:

    • Move all the required contents of your app’s user interface into the resource file.
    • Create new directories inside res to add support for Internationalization. Each directory's name should follow the rule <resource type>-(language code). For example, values-es contains the string resources for the es language code, i.e. Spanish.
    • Now add different locale content in the respective folder.

    We need to create separate directories for different locales because, to show locale-specific content, Android checks the specific folder, i.e. res/<resource type>-(language code) such as res/values-de, and shows content from that folder. That is why we need to move all the required content into resource files, so that each required piece of content can be shown in the specific locale.

    How Internationalization is implemented in SUSI Android

    In SUSI Android there are no locale-specific images, only strings. So I created only locale-specific values resource folders to add locale-specific strings. To create a locale-specific values folder I followed the above-mentioned rule, i.e. <resource type>-(language code).

    After that, I added specific language string in the respective folder.

    Instead of hard-coded strings, we use strings from the strings.xml file so that the text changes automatically according to the region.




    In the absence of a resource directory for a specific locale, Android uses the default resource directory.

    Integrate Weblate in SUSI Android

    Weblate is a web based translation tool. The best part of Weblate is its tight version control integration, which makes it easy for translators to contribute: a translator does not need to fork your repo and send a pull request for each change. Instead, the translator translates the project's strings on the Weblate site and Weblate sends a pull request for those changes.

    Weblate can host your free software projects for free, but that is up to them. Here is the link of the SUSI Android project hosted on Weblate. If your project is good, they may host it for free, but for that you have to apply from this link and select "ask for hosting". Then fill up the form as shown in the picture below.

    Once your project is hosted on Weblate, they will email you about it. After that, you have to integrate Weblate into your project so that Weblate can automatically push translated strings to your project and also get notified about changes in your repository. Here is the link on how to add the Weblate service and the Weblate user to your project.

    If it is not possible to host your project on Weblate for free, then you can host it on your own. You can follow the steps below:

    • First, we deploy Weblate on localhost using the installation guide given on the Weblate site. I installed Weblate from git and cloned the latest source code using Git:
    git clone
    • Now change directory to where you cloned the Weblate source code and install all the required dependencies and optional dependencies using:
    pip install -r requirements.txt


    pip install -r requirements-optional.txt
    • After doing that, we copy the example settings file inside the weblate/ directory to create our own settings file. Then we configure it and use the following command to migrate the settings.
    ./ migrate
    • Now create an admin using the following command.
    ./ createadmin
    • After that add a project from your Admin dashboard (Web translations-> Projects-> Add Project) by filling all details.
    • Once the project is added, we add the component (Web translations-> Components-> Add Component) to link our Translation files.
    • To change any translation we make changes and push it to the repository where our SSH key generated from Weblate is added. A full guide to do that is mentioned in this link.