Maintaining Extension State in SUSI.AI Chrome Bot Using Chrome Storage API

SUSI Chrome Bot is a browser action Chrome extension used to communicate with SUSI AI. A browser action extension in Chrome is like any other web app that you visit: it keeps your preferences, like theme and speech synthesis settings, and your data only while you are interacting with it. Once you close it, it forgets all of your data unless you save it in some database or use cookies for the session. We want to be able to save the chats and other preferences, like theme settings, that the user makes when interacting with SUSI AI through SUSI Chrome Bot. In this blog, we’ll explore Chrome’s API for storing data.

What options do we have for storing data offline?

IndexedDB: IndexedDB is a low-level API for client-side storage of data. IndexedDB allows us to store a large amount of data and works like an RDBMS, but IndexedDB is a JavaScript-based object-oriented database.

localStorage API: localStorage allows us to store data in key/value pairs, which is much more effective than storing data in cookies. localStorage data persists even if the user closes and reopens the browser.

chrome.storage: Chrome provides us with chrome.storage. It provides the same storage capabilities as the localStorage API with some advantages.

For susi_chromebot we will use chrome.storage because of the following advantages it has over the localStorage API:

  1. User data can be automatically synced with Chrome sync if the user is logged in.
  2. The extension’s content scripts can directly access user data without the need for a background page.
  3. A user’s extension settings can be persisted even when using incognito mode.
  4. It’s asynchronous, so bulk read and write operations are faster than with the serial and blocking localStorage API.
  5. User data can be stored as objects whereas the localStorage API stores data in strings.

Integrating chrome.storage into susi_chromebot for storing chat data

To use chrome.storage we first need to declare the necessary permission in the extension’s manifest file. Add “storage” to the permissions key inside the manifest file.

"permissions": [


We want to store the chats the user has had with SUSI. We will use a JavaScript object to store the chat data.

var storageObj = {
    senderClass: "",
    content: ""
};
The storageObj object has two keys, namely senderClass and content. The senderClass key represents the sender of the message (user or susi) whereas the content key holds the actual content of the message.

We will use the chrome.storage.sync.get() and chrome.storage.sync.set() methods to store and retrieve data.

var susimessage = newDiv.innerHTML;
storageObj.content = susimessage;
storageObj.senderClass = "susinewmessage";
chrome.storage.sync.get("message", (items) => {
    storageArr = items.message;
    storageArr.push(storageObj);
    chrome.storage.sync.set({"message": storageArr}, () => {
        // chat saved to chrome's StorageArea under the "message" key
    });
});


In the above code snippet, susimessage contains the actual message content sent by the SUSI server. We then set the corresponding properties of the storageObj object that we declared earlier. Now we could use chrome.storage.sync.set() to save the storageObj object, but that would overwrite the data currently inside Chrome’s StorageArea. To prevent the old message data from getting overwritten, we first get all the message content in our storage using chrome.storage.sync.get(). Notice how we are passing the “message” string as the first parameter to the function. This is done because we only want the message content that was saved in the StorageArea; if we pass null instead, it returns all the content inside the StorageArea. Once we have our messages (an array of the objects we stored as storageObj), we store them in a new array storageArr. We then push our new storageObj, which contains the message and the sender, into the array. Finally, we use chrome.storage.sync.set() to save the message content in Chrome’s StorageArea, from where it can later be retrieved using the “message” key.

storageArr.push(storageObj);{"message":storageArr},() => {

We use the same procedure to save messages sent by the user.
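When the extension popup is reopened, the saved conversation can be read back and rendered. A minimal sketch, assuming chrome.storage.sync as above and a hypothetical renderMessage() helper:

chrome.storage.sync.get("message", (items) => {
    var messages = items.message || [];
    messages.forEach((msg) => {
        // msg.senderClass identifies the sender, msg.content holds the message HTML
        renderMessage(msg.senderClass, msg.content);
    });
});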

Note: chrome.storage is not very large, so we need to be careful about what we store or we may run out of storage space. Also, we should not store confidential data in storage since the storage area is not encrypted.




Adding Map Type Response to SUSI.AI Chromebot

SUSI.AI Chromebot supports almost every sort of reply that the SUSI.AI Server can generate, but it was still missing the map type response.

This blog explains how the map type response was added to the chromebot.

Brief Introduction

The original issue was planned by Manish Devgan and Mohit Sharma as an advanced task for Google Code-In 2017. The link can be found here: #157

For a long time the issue remained untouched, and after GCI got over I assigned the issue to myself, since the map type was a major response from the SUSI.AI Server and hence a priority.

How was Map Type response added?

There were a lot of things to keep in mind before starting work on this issue.

  • Changing code scheme during GCI and other PRs
  • API Response from the SUSI.AI Server
  • Understanding the new codebase that got altered during GCI-17
  • Doing it quickly

I will go through all the steps in detail.

Changing Code Scheme

The code was altered numerous times by a number of pull requests during GCI-17, and there were no docstrings for any functions or methods. So I had to figure them out before starting work on the map type response.

API Response from the SUSI.AI Server

To understand the JSON that the server sent, I went to the SUSI.AI API and did a simple search for “Where is Berlin?”. The response generated is given below.

(Since the JSON is very big, I am only posting the data relevant to this issue.)


    "actions": [
        "type": "answer",
        "language": "en",
        "expression": "Berlin (, German: [bɛɐ̯ˈliːn] ( listen)) is the capital and the largest city of Germany as well as one of its 16 constituent states."
        "type": "anchor",
        "link": "",
        "text": "Here is a map",
        "language": "en"
        "type": "map",
        "latitude": "52.52436820069531",
        "longitude": "13.41053001275776",
        "zoom": "13",
        "language": "en"


Here we see that “actions” is an array of JSON objects and the third entry has its “type” as “map”. This is the relevant information we require for generating the map type response.

The important variables in this context are: “latitude” and “longitude”.
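In the chromebot code this means scanning the “actions” array for the entry whose “type” is “map”. A rough sketch (the surrounding variable names here are assumptions):

var actions = answer.actions;
actions.forEach(function (action) {
    if (action.type === "map") {
        response = composeReplyMap(response, action);
    }
});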

Understanding the Codebase

Now I had to figure out the new pattern of adding response types to the SUSI.AI Chromebot.

After having a talk with @ms10398 I figured out the route map.

The above image shows the correct flow of the JavaScript code that generates the response. After this, I was good to go and start my work.

Adding the Map-Type Response

To start with, I chose “LEAFLET.JS” as the JavaScript library to be used for creating maps.

  • So I added the LEAFLET.JS to the JS folder.
  • Now changes were made to the “index.html” file


<link rel="stylesheet" href="[email protected]/dist/leaflet.css" />



Appropriate CSS was added along with the link to leaflet.js.

  • Adding CSS to the “mapClass”

.mapClass {
    height: 200px;
    width: 200px;
}



  • Generating Maps with dynamic IDs

This part required some thought: to add the map to any div, the div needs a proper and unique ID, so I had to find a way to generate unique IDs for divs without using any external source.

I came up with the idea of using a timestamp, as it will always be unique.

var timeStamp = new Date().getTime().toString(); // a sketch: current time in milliseconds as a unique ID


Then I created the “composeReplyMap()” function.

function composeReplyMap(response, action){

    var newDiv = messages.childNodes[messages.childElementCount];

    var mapDiv = document.createElement("div");

    var mapDivId = timeStamp; // the unique timestamp-based ID from above

    mapDiv.setAttribute("id", mapDivId);

    mapDiv.setAttribute("class", "mapClass");

    newDiv.appendChild(mapDiv); // a sketch: attach the map container to the message div

    var newMap = L.map(mapDivId).setView([Number(action.latitude), Number(action.longitude)], 13);

    // the tile layer is added to newMap here; this part contains the data for the API call

    response.isMap = true;

    response.newMap = mapDiv;

    return response;
}




The complete code can be found: here

At last, after adding so many snippets of code, we were able to generate the map type response for SUSI.AI Chromebot.


A gif showing the Map-Type response in action.




Toggling Voice On/Off in SUSI Chromebot

SUSI Chromebot has a lot of features that make it one of the best projects of FOSSASIA.

Recently voice/speech was added to SUSI Chromebot, but there was no option to control whether speech output was wanted or not.

The latest addition to SUSI Chromebot is Toggling the Voice of SUSI On or Off.

How was it achieved?

Toggling voice for SUSI required adding a button and a snippet of JavaScript code to the main JS file. The code takes care of whether the voice is to be toggled on or off.

I started off by adding a button to the main HTML file.

<a href="javascript: void(0)" id="speak" style="color: white"><i class="material-icons" id="speak-icon">volume_up</i></a>

The above snippet of HTML code adds a voice button to the top bar of chromebot.

Then came the major part, where the JavaScript code was added to give functionality to the button.

var shouldSpeak = true;

I started off by creating a variable called shouldSpeak which determines whether or not SUSI should use Chrome’s API to speak.

Then I changed the “speakOut()” function and added another parameter to it.

function speakOut(msg, speak = false){
    if (speak) {
        var voiceMsg = new SpeechSynthesisUtterance(msg);
        window.speechSynthesis.speak(voiceMsg);
    }
}




The above code makes sure that SUSI speaks only when the speak parameter is set to true.

Then an event listener was added to the button to link up the functionality, as sketched below.
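A minimal sketch of that wiring, assuming the anchor tag with id “speak” from the HTML snippet above:

document.getElementById('speak').addEventListener('click', changeSpeak);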


It adds a click event to “speak” and associates it with the function “changeSpeak”.

Now the function “changeSpeak” is created as follows. It toggles the on/off mechanism of voice in SUSI Chromebot.

function changeSpeak(){
    shouldSpeak = !shouldSpeak;
    var SpeakIcon = document.getElementById('speak-icon');
    if (!shouldSpeak) {
        SpeakIcon.innerText = "volume_off";
    } else {
        SpeakIcon.innerText = "volume_up";
    }
    console.log('Should be speaking? ' + shouldSpeak);
}


Every time the user clicks on the icon to toggle voice on/off, the icon must also change; this functionality is taken care of by the if/else in the code above.



Comparison between SUSI AI with Mycroft AI and Amazon Alexa

Now is the era of Voice User Interface (VUI) devices and they play a very important role as personal assistants. Here we compare SUSI AI, Mycroft AI and Amazon Alexa based on the number of skills, their availability, the ease of adding and editing skills, and the provision for the user to modify a skill and add more to it if needed.


The Comparison:

  1. Starting with the number of skills: Amazon Alexa supports far more skills than both Mycroft AI and SUSI AI.
  2. Availability: Mycroft AI and SUSI AI are available everywhere and can be set up anywhere regardless of the country, whereas Alexa is available in the U.S., U.K., Germany and India, though Amazon is aggressively expanding.
  3. Adding and editing skills: Mycroft and SUSI are open source and their skills can be added, edited and viewed by the open source community. Issues can be filed to enhance the functionality of the skills, whereas Alexa skills are not open source and certification and publishing of a skill is done by the Amazon team. Mycroft and SUSI skills can be customized by the user, but this fails with Alexa, as users have to create the same skill from scratch if they want to customize it.
  4. Platforms supported: Mycroft, SUSI and Alexa all support Linux. Mycroft lacks support for Windows and Mac but supports Raspberry Pi and Android; Alexa provides support for Windows, Mac and Raspberry Pi. SUSI also provides support for Android and iOS and can be integrated with speakers, vehicles, Pi, etc.
  5. Dedicated devices: As of now SUSI AI lacks such a device. Mycroft has the Mark 1 and Alexa has the Echo. These devices are portable and are good candidates for home automation.
  6. Languages used for skill development: Mycroft mostly uses Python. Alexa uses Python, NodeJS, C#, etc. for development of applications. SUSI uses its own language, but languages like JavaScript can be included in it. It’s easier to specify patterns using wildcards and variables in SUSI (see the sketch after this list).
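For instance, a sketch of a SUSI skill rule using a wildcard, following the pattern syntax from the SUSI skill tutorial (the rule itself is illustrative):

my name is *
Hello $1$, nice to meet you!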

Due to the different languages used, Mycroft AI skills can’t be directly used in SUSI AI. We need to convert Mycroft skills to SUSI skills if Mycroft skills are to be used with SUSI.

Some suggestions for making a dedicated device for SUSI:

  1. We can use a Raspberry Pi, USB headphones and a microphone to make a basic platform.
  2. We can install Jasper to enable voice input on the Pi. Jasper is an open source application that enables us to make voice-controlled applications.
  3. We can use the SUSI server to interact with the device and home appliances like lights. The SUSI server can process the state of the appliance (lights in this case) and return it as JSON objects to the Raspberry Pi, which may then change the state as per user input.

Make a simple Hello World skill with SUSI:

  1. Visit for a basic introduction to SUSI skill syntax and how it works.
  2. Go to .
  3. Enter the skill name, say “hello”.
  4. You will be greeted by a welcome message – “roses are red…..”. Delete it and replace it with the following snippet.
::name <Skill_name> #<— Enter skill name. for example hello

::author <author_name>

::author_url <author_url> #<— You can leave this empty as of now.

::description <description> #<— skill description

::dynamic_content No

::developer_privacy_policy <link> #<— you can leave this as of now.

::image <image_url> #<— You can leave this as of now.

::term_of_use <link>

#Intent. Comments are written with a #

hi|hello|what’s up #<— This is what the user says

Hi|I am good|Hello #<— This is what the skill answers

6. Now go to for testing.

7. In the SUSI chat dialog box (present at the bottom of the page) enter dream <test application name>, where “test application name” is the name you entered when you first visited. In this case: “dream hello”.

8. You can input “what’s up” in the dialog box and it will give you the desired output that you specified in the skill.


SUSI has its own good points but it lags in some departments, like the number and type of skills. Like Mycroft, we can start making various skills and try to build a basic prototype of a dedicated SUSI personal assistant device.


  1. Jasper
  2. Skill addition to SUSI
  3. Mycroft hello world skill


Automatically deploy SUSI Web Chat on surge after Travis passes

We have been using surge from the very beginning of the SUSI Web Chat and SUSI Skill CMS projects’ development. We use surge to provide preview links for pull requests. Surge is a really easy tool to use: we can deploy our static web pages easily and quickly. But if the user has to change something in a pull request, they have to deploy to surge again and update the link. If we connect this operation with Travis CI we can minimise rework. We can embed the deployment commands inside the travis.yml.

We can tell Travis to make a preview link (surge deployment) if the test cases pass, by embedding the surge deployment commands inside the travis.yml like below.

This is the travis.yml file:

sudo: required
dist: trusty
language: node_js
node_js:
 - 6
script:
 - npm test
after_success:
 # run the deployment scripts once the build succeeds
 - bash ./
 - bash ./
cache:
  directories:
   - node_modules
branches:
  only:
   - master

The surge deployment commands are inside the “” file.
In that script we first have to check whether the build is for a pull request; if not, we skip the surge deployment. We can do it like below.

if [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
   echo "Not a PR. Skipping surge deployment"
   exit 0

Then we have to install surge in the environment, install all the npm packages, and run the build.

npm i -g surge
npm install
npm run build

Since there is an issue where navigating directly to child routes returns a 404 on surge, we take a copy of the index.html file and name it 404.html.

cp ./build/index.html ./build/404.html

Then make two environment variables for your surge email address and surge token:

# surge login email (SURGE_LOGIN is the variable surge reads)
export SURGE_LOGIN=you@example.com
# surge token (run 'surge token' to get the token)
export SURGE_TOKEN=d1c28a7a75967cc2b4c852cca0d12206

Now we have to make the surge deployment URL (domain). It should be unique, so we make a URL that contains the pull request number.
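A sketch of such a domain (the exact naming scheme here is an assumption; TRAVIS_PULL_REQUEST is provided by Travis CI):

export DEPLOY_DOMAIN=pr-${TRAVIS_PULL_REQUEST}-susi-web-chat.surge.sh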

surge --project ./build/ --domain $DEPLOY_DOMAIN;

Since all our static content produced by the build process is in the “build” folder, we tell surge to serve the static HTML files from there.
Now make a pull request. You will find the deployment link in the Travis CI report after Travis passes.

Expand the output of the deployment script in the Travis log.

You will find the deployment link as we defined it in the file.


  • Integrating with travis ci –
  • React Routes to Deploy 404 page on gh-pages and surge –

Recognise new SUSI users and Welcome them

The SUSI web chat application is up and running now. It gives good answers for most of the questions users ask, but for new users the application did not display a welcome message or an introduction, which is distracting. So a new requirement arrived: show a welcome message to new users or give them an introduction to the application.

To give an introduction or show a welcome message, we need to identify new users. For that I used cookies.
I added a new dialog to show the welcome message and an introductory video, then placed the code below in the DialogSection.js file, which contains the code for every dialog box of the application.

        <Dialog
          contentStyle={{ width: '610px' }}
          title="Welcome to SUSI Web Chat"
          ...
        >
          <Close style={closingStyle} onTouchTap={this.props.onRequestCloseTour()} />
          ...
        </Dialog>

We have already installed the ‘universal-cookie’ npm module in our application, so we can use this module to read and write cookies.
I used the following check to determine whether the user is new or not.
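A minimal sketch of that check, assuming the ‘visited’ cookie that handleCloseTour (below) sets:

import Cookies from 'universal-cookie';
const cookies = new Cookies();

if (!cookies.get('visited')) {
    // new user: open the welcome dialog
}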



Right now it shows the dialog box for each and every user. We don’t need to display the welcome message to old users, so we need to store a cookie on the user’s computer.
I store the cookie when the user clicks on the close button of the welcome dialog box.
The function below sets the new cookie on the user’s computer.

  handleCloseTour = () => {
    // close the open dialogs and remember the visit
    this.setState({
      showLogin: false,
      showSignUp: false,
      showThemeChanger: false,
      openForgotPassword: false,
    });
    cookies.set('visited', true, { path: '/' });
  }


The line below sets the cookie, and { path: '/' } makes the cookie accessible on all pages.
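cookies.set('visited', true, { path: '/' });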


Setting up SUSI Desktop Locally for Development and Using Webview Tag and Adding Event Listeners

SUSI Desktop is a cross-platform desktop application based on Electron which presently uses the SUSI web chat as a submodule and allows users to interact with SUSI right from their desktop.

Any Electron app essentially comprises the following components:

    • Main Process (Managing windows and other interactions with the operating system)
    • Renderer Process (Manage the view inside the BrowserWindow)

Steps to setup development environment

      • Clone the repo locally.
$ git clone
$ cd susi_desktop
      • Install the dependencies listed in package.json file.
$ npm install
      • Start the app using the start script.
$ npm start

Structure of the project

The project was restructured to ensure that the working environments of the main and renderer processes are separate, which makes the codebase easier to read and debug. This is how the project is currently structured.

The root directory of the project contains another directory, ‘app’, which contains our Electron application. Then we have a package.json which holds information about the project and the modules required for building it, and then there are other GitHub helper files.

Inside the app directory-

  • Main – Files for managing the main process of the app
  • Renderer – Files for managing the renderer process of the app
  • Resources – Icons for the app and the tray/media files
  • Webview Tag

    Displays external web content in an isolated frame and process. This is used to load the SUSI web chat in a BrowserWindow as:

    <webview src=""></webview>

    Adding event listeners to the app

    Various electron APIs were used to give a native feel to the application.

  • Send focus to the window WebContents on focusing the app window.

    win.on('focus', () => {
        win.webContents.focus();
    });

  • Display the window only once the DOM has completely loaded.

    const page = mainWindow.webContents;
    page.on('dom-ready', () => {
        mainWindow.show();
    });

  • Display the window on the ‘ready-to-show’ event.

    win.once('ready-to-show', () => {
        win.show();
    });


    1. A quick article to understand electron’s main and renderer process by Cameron Nokes at Medium link
    2. Official documentation about the webview tag at
    3. Read more about electron processes at
    4. SUSI Desktop repository at

    How to Store Mobile Settings in the Server from SUSI Web Chat Settings Page

    While adding new features and capabilities to the SUSI Web Chat application, we wanted to give SUSI users the capability to change settings. The SUSI team decided to maintain a settings page for that.

    This is how its interface looks now.

    In this blog post I’m going to add another settings category to our settings page, this one for saving the mobile phone number and dial code on the server.

    UI Development:

    First we need to add a new category to the settings page, and it should be invisible when the user is not logged in: anonymous users should not get the mobile phone category in the settings page.

         let menuItems = cookies.get('loggedIn') ? (
                    <div className="settings-list">
                        <Menu style={{ width: '100%' }} ... >
                            ...
                            <MenuItem value='Mobile' className="setting-item" leftIcon={<MobileIcon />}>Mobile<ChevronRight className="right-chevron" /></MenuItem>
                            <hr className="break-line" />
                            ...


    Next we have to show the settings UI when the user clicks on the category name.

     if (this.state.selectedSetting === 'Mobile' && cookies.get('loggedIn')) {
            currentSetting = (
                ...
                <Translate text="Country/region : " />
                <DropDownMenu maxHeight={300}
                ...


    Show “US” if the state does not define the country code:

    <Translate text="Phone number : " />
                                <TextField name="selectedCountry"
                                value={countryData.countries[this.state.countryCode?this.state.countryCode:'US'].countryCallingCodes[0] }
                                <TextField name="serverUrl"
                                    value={this.state.phoneNo }


    Then we need to get the list of country names and country dial codes to show in the above drop-down. We used the country-data node module for that.

    To install the country-data module use this command:

    npm install --save country-data


    We have used it in the settings page as below.

    import countryData from 'country-data';

    countryData.countries.all.sort(function(a, b) {
        if (a.name < b.name) { return -1; }
        if (a.name > b.name) { return 1; }
        return 0;
    });
    let countries = countryData.countries.all.map((country, i) => {
        return (<MenuItem value={countryData.countries.all[i].alpha2} key={i} primaryText={countryData.countries.all[i].name + ' ' + countryData.countries.all[i].countryCallingCodes[0]} />);
    });


    First we sort the country data list by name. After that we make a list of MenuItems from this data.
    Then we have to check whether the user changed or added the phone number and region (dial code).
    This is handled by the functions mentioned above (onChange={this.handleCountryChange} and
    onChange={this.handleTelephoneNoChange}).

        handleCountryChange = (event, index, value) => {
            this.setState({'countryCode': value });
        }


    Then we have to get the phone number using the function below.

        handleTelephoneNoChange = (event, value) => {
            this.setState({'phoneNo': value});
        }


    Next we have to update the function that is triggered when the user clicks the save button.

        handleSubmit = () => {
            // fall back to the defaults when the user has not picked anything
            // (a sketch: the exact fallback expressions were elided in the original)
            let newCountryCode = !this.state.countryCode ? 'US' : this.state.countryCode;
            let newCountryDialCode = !this.state.countryDialCode ? '+1' : this.state.countryDialCode;
            let newPhoneNo = this.state.phoneNo;
            let vals = {
                countryCode: newCountryCode,
                countryDialCode: newCountryDialCode,
                phoneNo: newPhoneNo
            };
            let settings = Object.assign({}, vals);
            cookies.set('settings', settings);


    This code snippet stores the country code, country dial code and phone number in the server.
    Now we have to update the store; here we are going to change the UserPreferencesStore.
    First we have to set up default values for the things we are going to store.

    let _defaults = {
        CountryCode: 'US',
        CountryDialCode: '+1',
        PhoneNo: ''
    };


    Finally we have to update the dispatchToken to store and retrieve the new data.

    UserPreferencesStore.dispatchToken = ChatAppDispatcher.register(action => {
        switch (action.type) {
            case ActionTypes.SETTINGS_CHANGED: {
                let settings = action.settings;
                _defaults.Theme = settings.theme;
                _defaults.countryDialCode = settings.countryDialCode;
                _defaults.phoneNo = settings.phoneNo;
                _defaults.countryCode = settings.countryCode;
                ...
            }
        }
    });
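    To read these values back, components need getters on the store. A small sketch (the getter names here are assumptions, not necessarily the ones used in the actual codebase):

    getCountryCode() {
        return _defaults.CountryCode;
    },
    getPhoneNo() {
        return _defaults.PhoneNo;
    }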


    Finally, the application is ready to store and update the mobile phone number and region code on the server.


    SUSI.AI Chrome Bot and Web Speech: Integrating Speech Synthesis and Recognition

    Susi Chrome Bot is a Chrome extension which is used to communicate with Susi AI. The advantage of having Chrome extensions is that they are very accessible to the user for performing certain tasks which would otherwise need the user to move to another tab/site.

    In this blog post, we will go through the process of integrating the Web Speech API into SUSI Chromebot.

    Web Speech API

    The Web Speech API enables web apps to use voice data. The Web Speech API has two components:

    Speech Recognition:  Speech recognition gives web apps the ability to recognize voice data from an audio source. Speech recognition provides the speech-to-text service.

    Speech Synthesis: Speech synthesis provides the text-to-speech services for the web apps.
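    Support for these interfaces varies across browsers, so a quick feature check before wiring anything up is a useful safeguard; a small sketch:

    // bail out early if the prefixed recognition interface is missing
    if (!('webkitSpeechRecognition' in window)) {
        console.log('Speech recognition is not supported in this browser.');
    }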

    Integrating speech synthesis and speech recognition in SUSI Chromebot

    Chrome provides the webkitSpeechRecognition() interface which we will use for our speech recognition tasks.

    var recognition = new webkitSpeechRecognition();


    Now we have a speech recognition instance, recognition. Let us define the necessary checks for error detection and for resetting the recognizer.

    var recognizing;

    function reset() {
        recognizing = false;
    }

    recognition.onerror = function(e){
        reset();
    };

    recognition.onend = function(){
        reset();
    };


    We now define the toggleStartStop() function that will check if recognition is already being performed in which case it will stop recognition and reset the recognizer, otherwise, it will start recognition.

    function toggleStartStop() {
        if (recognizing) {
            recognition.stop();
            reset();
        } else {
            recognition.start();
            recognizing = true;
        }
    }


    We can then attach an event listener to a mic button which calls the toggleStartStop() function to start or stop our speech recognition.

    mic.addEventListener("click", function () {


    Finally, when the speech recognizer has some results it calls the onresult event handler. We’ll use this event handler to catch the results returned.

    recognition.onresult = function (event) {
        for (var i = event.resultIndex; i < event.results.length; ++i) {
            if (event.results[i].isFinal) {
                textarea.value = event.results[i][0].transcript;
            }
        }
    };


    The above code snippet checks the results produced by the speech recognizer; if a result is final, it sets the textarea value to the transcript, and we then submit that to the backend.

    One problem that we might face is the extension not being able to access the microphone. This can be resolved by asking for microphone access from an external tab/window/iframe. For SUSI Chromebot this is done using an external tab: pressing the settings icon opens a new tab which then asks for microphone access. This needs to be done only once, so it does not cause a lot of trouble.

    setting.addEventListener("click", function () {
    url: chrome.runtime.getURL("options.html")
    audio: true
    }, function(stream) {
    }, function () {
    console.log('no access');


    In contrast to speech recognition, speech synthesis is very easy to implement.

    function speakOutput(msg){
        var voiceMsg = new SpeechSynthesisUtterance(msg);
        window.speechSynthesis.speak(voiceMsg);
    }


    This function takes a message as input, declares a new SpeechSynthesisUtterance instance and then calls the speak method to convert the text message to voice.

    There are many properties and attributes that come with this speech recognition and synthesis interface. This blog post only introduces the very basics.



    Store User’s Personal Information with SUSI

    In this blog, I discuss how SUSI.AI stores the personal information of its users. This information mostly consists of usernames/links to different websites like LinkedIn, GitHub, Facebook, Google/Gmail, etc. To store such details, we have a dedicated API. The endpoint is:

    In this API/Servlet, both aspects are covered: storing the details and getting the details. At the time of making the API call, the user has the option either to ask the server for a list of available store names along with their values or to request the server to store a value for a particular store name. If a store name already exists and a client makes a call with a new/updated value, the servlet will update the value for that particular store name.
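    For illustration, the two kinds of calls look roughly like this (the endpoint path is elided above; the parameter names fetchDetails, storeName and value are taken from the servlet code below):

    <endpoint>?fetchDetails=true
    <endpoint>?storeName=github&value=my-github-username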

    The reason the minimal user role is set to USER is quite obvious: these details correspond to a particular user. Hence we neither want someone writing such information anonymously, nor do we want this information to be visible to anonymous users unless allowed by the user.

    In the next steps, we start evaluating the API call made by the client by looking at the combination of parameters present in the request. If the request is to fetch the list of available stores, the server first checks whether the Accounting object even has a JSONObject for “stores”. If not found, it sends the error message “No personal information is added yet.” with error code 420. Prior to all these steps, the server first obtains an accounting object for the user. If stores are found, the details are encoded as a parameter of the response JSONObject. Look at the code below to understand things fairly.

    Accounting accounting = DAO.getAccounting(authorization.getIdentity());
            if (post.get("fetchDetails", false)) {
                if (accounting.getJSON().has("stores")) {
                    JSONObject jsonObject = accounting.getJSON().getJSONObject("stores");
                    json.put("stores", jsonObject);
                    json.put("accepted", true);
                    json.put("message", "details fetched successfully.");
                    return new ServiceResponse(json);
                } else {
                    throw new APIException(420, "No personal information is added yet.");
                }
            }

    If the request was not to fetch the list of available stores, it means the client wants the server to save a new field or update a previous value for a store name. A combination of if-else statements evaluates whether the call contains the required parameters.

    if (post.get("storeName", null) == null) {
        throw new APIException(422, "Bad store name encountered!");
    }

    String storeName = post.get("storeName", null);

    if (post.get("value", null) == null) {
        throw new APIException(422, "Bad store name value encountered!");
    }

    String value = post.get("value", null); // the value is extracted the same way (a sketch)

    If the request contains all the required data, the store name and value are extracted from the request as a key-value pair.

    In the next steps, since the server is expected to store the list of store names for a particular user, the identity is first gathered from the authorization object already present in the “serviceImpl” method. If the server finds a null identity, it throws an error with error code 400 and the message “Specified User Setting not found, ensure you are logged in”.

    Otherwise, the server first checks whether a JSONObject with the key “stores” exists. If not, it creates one and puts the key-value pair into the new JSONObject; if it does, it puts the pair into the existing object.

    Since these details are for a particular account (i.e. for a particular user), they are placed in the Accounting.json file. For a better understanding, look at the code snippet below.

    if (accounting.getJSON().has("stores")) {
                    accounting.getJSON().getJSONObject("stores").put(storeName, value);
                } else {
                    JSONObject jsonObject = new JSONObject(true);
                    jsonObject.put(storeName, value);
                    accounting.getJSON().put("stores", jsonObject);
                json.put("accepted", true);
                json.put("message", "You successfully updated your account information!");
                return new ServiceResponse(json);
