Adding Global Search and Extending Bookmark Views in Open Event Android

When we design an application, it is essential that the design and feature set enable users to find all the relevant information they are looking for. In the first versions of the Open Event Android app it was difficult to find the Sessions and Speakers related to a certain Track; it was only possible to search for them individually. The user also could not view bookmarks on the main page but had to go to a separate tab to view them. These were some capabilities I wanted to add to the app. In this post I outline the concepts and advantages of a Global Search and a Home Screen in the app. I took inspiration from the Google I/O 2017 app, which already had these features, and I demonstrate how I added a Home Screen which also lets users view their bookmarks on the Home Screen itself.

Global Search v/s Local Search

If we look closely at the above images, we can see a stark difference in the capabilities of each search. In the Local Search we are only able to search within the Tracks section and nothing else. This is fixed by the Global Search page, which exists alongside the new home screen. Since all the results a user might need are obtained from a single search, it improves the overall user experience of the app. Another feature missing in the previous iteration of the application was that a user had to go to a separate tab to view his/her bookmarks. It is better for the app to have a home page detailing all the Event's/Conference's details as well as displaying the user's bookmarks on the home page.

New Home

The above posted images/gifs show the functioning and the UI/UX of the new Home Screen within the app. Currently I am working to further improve the way the bookmarks are displayed. The new home screen provides the user with the event details, i.e. FOSSASIA 2017 in this case.
This would be different for each conference/event, and the data is fetched from the open-event-orga server (the first part of the project) if it doesn't already exist in the JSON files provided in the assets folder of the application. All the event information is populated from the JSON files provided in the assets folder in the app directory structure:

config.json
sponsors.json
microlocations.json
event.json (this stores the information that we see on the home screen)
sessions.json
speakers.json
track.json

All the file names are descriptive enough to denote what each of them stores. I hope that I have put forward why the addition of a new Home with bookmarks, along with the Global Search feature, was a neat addition to the app.

Link to PR for this feature: https://github.com/fossasia/open-event-android/pull/1565

Resources
https://guides.codepath.com/android/Heterogenous-Layouts-inside-RecyclerView
https://developer.android.com/training/search/index.html …

Continue Reading: Adding Global Search and Extending Bookmark Views in Open Event Android

Using Ember.js Components in Open Event Frontend

Ember.js is a comprehensive JavaScript framework for building highly ambitious web applications. The basic tenet of Ember.js is convention over configuration, which means that it understands that a large part of the code, as well as the development process, is common to most web applications. Components are elements whose role, properties and functions remain the same throughout the entire project. They allow developers to bundle up HTML elements and styles into reusable custom elements which can be called anywhere within the project. In Ember, a component consists of two parts: some JavaScript code and an HTMLBars template. The JavaScript component file defines the behaviour and properties of the component; the behaviours are typically defined using actions. The HTMLBars file defines the markup for the component's UI. By default, the component is rendered into a 'div' element, but a different element can be specified if required. A great thing about templates in Ember is that other components can be called inside a component's template. To call a component in an Ember app, we use the {{curly-brace-syntax}}. By design, components are completely isolated, which means they are not directly affected by any surrounding CSS or JavaScript.

Let's demonstrate a basic Ember component from the Open Event Frontend project for displaying text as a popup. The component renders a simple text view which displays the entire text. The component is designed for the common case where, due to unavailability of space, we are unable to show the complete text: it compares the available space with the space required by the whole text view. If the available space is not sufficient, the text is ellipsized, and on hovering over the text a popup appears where the complete text can be seen.
Generating the component

The component can be generated using the following command:

$ ember g component smart-overflow

Note: The component's name needs to include a hyphen. This is an Ember convention, but it is an important one as it ensures there are no naming collisions with future HTML elements. This will create the required .js and .hbs files needed to define the component, as well as an Ember integration test.

The Component Template

In the app/templates/components/smart-overflow.hbs file we can create some basic markup to display the text when the component is called:

<span>
  {{yield}}
</span>

The {{yield}} is a Handlebars expression which helps render the data to display when the component is called.

The JavaScript Code

In the app/components/smart-overflow.js file, we define how the component behaves when it is called:

import Ember from 'ember';

const { Component } = Ember;

export default Component.extend({
  classNames: ['smart-overflow'],
  didInsertElement() {
    this._super(...arguments);
    var $headerSpan = this.$('span');
    var $header = this.$();
    $header.attr('data-content', $headerSpan.text());
    $header.attr('data-variation', 'tiny');
    while ($headerSpan.outerHeight() > $header.height()) {
      $headerSpan.text((index, text) => {
        return text.replace(/\W*\s(\S)*$/, '...');
      });
      $header.popup({
        position: 'top…
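The ellipsizing loop in the component relies on jQuery and live DOM height measurements, but its core word-dropping idea can be sketched as a plain function. This is a simplified illustration, not the project's code: a hypothetical maxLength character budget stands in for the height comparison.

```javascript
// Sketch of smart-overflow's ellipsis logic: keep stripping the trailing
// word (same regex as the component) and appending '...' until the text
// fits the budget or no whitespace is left to strip.
function ellipsize(text, maxLength) {
  let result = text;
  while (result.length > maxLength && /\s/.test(result)) {
    result = result.replace(/\W*\s(\S)*$/, '...');
  }
  return result;
}

console.log(ellipsize('the quick brown fox', 12)); // -> the quick...
console.log(ellipsize('short', 10));               // -> short
```

In the real component the loop condition compares the span's rendered height against the container's height instead of a character count, which is why it must run in didInsertElement, after the element is in the DOM.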

Continue Reading: Using Ember.js Components in Open Event Frontend

Forms and their validation using Semantic UI in Open Event Frontend

A web form acts as a communication bridge that allows a user to communicate with the organisation and vice versa. In the Open Event project, we need forms so users can contact the organisation, register themselves, log into the website, order a ticket or query for some information. Here are a few things which were kept in mind while designing forms in the Open Event Frontend project:

The forms were designed on the principle of keeping things simple, which means a form should ask only for the information which is actually required.
They contain the relevant fields ordered in a logical way according to their importance.
They offer clear error messages instantly, to give direct feedback and allow users to make corrections right away.
Clear examples are shown in front of the fields.
Proper spacing among the fields is maintained so that error messages can be displayed next to their respective form fields.
The mandatory fields are highlighted using '*' to avoid confusion.
Proper colour combinations are used to inform the user about progress while filling in the form, e.g. red for 'error or incomplete' information, while green signifies 'correct'.
The current data is saved in case the user has to go back to make corrections later.
The user can toggle through the form using the keyboard.

These design principles help avoid a negative user experience while using the forms. Let's take a closer look at the form and the form validation in the case of the 'purchase a new ticket' form on the Orders page in the Open Event Frontend application.
Creating a form

Let's start by writing some HTML for the form:

<form class="ui form" {{action 'submit' on='submit' }}>
  <div class="ui padded segment">
    <h4 class="ui horizontal divider header">
      <i class="ticket icon"></i>
      {{t 'Ticket Buyer'}}
    </h4>
    <div class="field">
      <label class="required" for="firstname">{{t 'First Name'}}</label>
      {{input type='text' name='first_name' value=buyer.firstName}}
    </div>
    <div class="field">
      <label class="required" for="lastname">{{t 'Last Name'}}</label>
      {{input type='text' name='last_name' value=buyer.lastName}}
    </div>
    <div class="field">
      <label class="required" for="email">{{t 'Email'}}</label>
      {{input type='text' name='email' value=buyer.email}}
    </div>
    <h4 class="ui horizontal divider header">
      <i class="ticket icon"></i>
      {{t 'Ticket Holder\'s Information'}}
    </h4>
    {{#each holders as |holder index|}}
      <div class="inline field">
        <i class="user icon"></i>
        <label>{{t 'Ticket Holder '}}{{inc index}}</label>
      </div>
      <div class="field">
        <label class="required" for="firstname">{{t 'First Name'}}</label>
        {{input type='text' name=(concat 'first_name_' index) value=holder.firstName}}
      </div>
      <div class="field">
        <label class="required" for="lastname">{{t 'Last Name'}}</label>
        {{input type='text' name=(concat 'last_name_' index) value=holder.lastName}}
      </div>
      <div class="field">
        <label class="required" for="email">{{t 'Email'}}</label>
        {{input type='text' name=(concat 'email_' index) value=holder.email}}
      </div>
      <div class="field">
        {{ui-checkbox label=(t 'Same as Ticket Buyer') checked=holder.sameAsBuyer onChange=(action 'fillHolderData' holder)}}
      </div>
    {{/each}}
    <p>
      {{t 'By clicking "Pay Now", I acknowledge that I have read and agree with the Open Event terms of services and privacy policy.'}}
    </p>
    <div class="center aligned">
      <button type="submit" class="ui teal submit button">{{t 'Pay Now'}}</button>
    </div>
  </div>
</form>

The complete code for the form can be seen here.
In the above code, we have used Semantic UI elements like button, input, label, icon, header and modules like dropdown, checkbox to create the basic structure of…
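Semantic UI wires form fields to validation rules through a plain configuration object keyed by the inputs' name attributes. As a rough sketch of what such rules could look like for the buyer fields in the form above (the identifiers match the template; the rule set itself is an assumption, and the project's actual rules may differ):

```javascript
// Hypothetical validation rules in the shape Semantic UI's form module
// expects: one entry per field, each with an identifier and rule list.
const validationRules = {
  fields: {
    firstName: {
      identifier: 'first_name',
      rules: [{ type: 'empty', prompt: 'Please enter your first name' }]
    },
    lastName: {
      identifier: 'last_name',
      rules: [{ type: 'empty', prompt: 'Please enter your last name' }]
    },
    email: {
      identifier: 'email',
      rules: [{ type: 'email', prompt: 'Please enter a valid email address' }]
    }
  }
};

// In the browser this object would be handed to Semantic UI, e.g.
// $('.ui.form').form(validationRules);
console.log(Object.keys(validationRules.fields).length); // -> 3
```

Because the prompts surface right next to the offending field, this is what delivers the instant, per-field feedback described in the design principles above.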

Continue Reading: Forms and their validation using Semantic UI in Open Event Frontend

How to make SUSI AI Line Bot

In order to integrate SUSI's API with a LINE bot you will first need a LINE account so that you can follow the procedure below. You can download the app from here.

Pre-requisites:
LINE app
GitHub
Heroku

Steps:
1. Install Node.js from https://nodejs.org/en/ on your computer if you haven't installed it already.
2. Create a folder with any name, open a shell and change your current directory to the new folder you created.
3. Type npm init on the command line and enter details like name, version and entry point.
4. Create a file with the same name that you entered as the entry point in the step above, i.e. index.js, and place it in the same folder you created.
5. Type the following command on the command line: npm install --save @line/bot-sdk. After bot-sdk is installed, type npm install --save express. After express is installed, type npm install --save request.
6. When all the modules are installed, check your package.json; the modules will be included within the dependencies portion.

Your package.json file should look like this.
{
  "name": "SUSI-Bot",
  "version": "1.0.0",
  "description": "SUSI AI LINE bot",
  "main": "index.js",
  "dependencies": {
    "@line/bot-sdk": "^1.0.0",
    "express": "^4.15.2",
    "request": "^2.81.0"
  },
  "scripts": {
    "start": "node index.js"
  }
}

Copy the following code into the file you created, i.e. index.js:

'use strict';
const line = require('@line/bot-sdk');
const express = require('express');
var request = require("request");

// create LINE SDK config from env variables
const config = {
  channelAccessToken: process.env.CHANNEL_ACCESS_TOKEN,
  channelSecret: process.env.CHANNEL_SECRET,
};

// create LINE SDK client
const client = new line.Client(config);

// create Express app
// about Express: https://expressjs.com/
const app = express();

// register a webhook handler with middleware
app.post('/webhook', line.middleware(config), (req, res) => {
  Promise
    .all(req.body.events.map(handleEvent))
    .then((result) => res.json(result));
});

// event handler
function handleEvent(event) {
  if (event.type !== 'message' || event.message.type !== 'text') {
    // ignore non-text-message events
    return Promise.resolve(null);
  }
  var options = {
    method: 'GET',
    url: 'http://api.asksusi.com/susi/chat.json',
    qs: {
      timezoneOffset: '-330',
      q: event.message.text
    }
  };
  request(options, function(error, response, body) {
    if (error) throw new Error(error);
    // answer fetched from SUSI
    var ans = (JSON.parse(body)).answers[0].actions[0].expression;
    // create a text message echoing the answer
    const answer = {
      type: 'text',
      text: ans
    };
    // use reply API
    return client.replyMessage(event.replyToken, answer);
  });
}

// listen on port
const port = process.env.PORT || 3000;
app.listen(port, () => {
  console.log(`listening on ${port}`);
});

Note that the options object passed to request must be the same one that was defined above (the original snippet declared options1 but called request(options, ...), which would throw a ReferenceError). Now we have to get the channel access token and channel secret; to get them, follow the steps below.
If you have a LINE account, move to the next step; otherwise sign up and make one. Create a LINE account on the LINE Business Center with the Messaging API and follow these steps:
1. In the LINE Business Center, select Messaging API under the Service category at the top of the page.
2. Select 'Start using Messaging API', enter the required information and confirm it.
3. Click the LINE@ Manager option. In settings, go to bot settings and enable the Messaging API.
4. Now we have to configure the settings: allow messages using webhooks by selecting 'Allow' for 'Use Webhooks'.
5. Go to the Accounts option at the top of the page and open LINE Developers.
To get the Channel access…
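The answer extraction in index.js (JSON.parse(body).answers[0].actions[0].expression) can be exercised in isolation with a trimmed stand-in for the chat.json payload. The sample body below is illustrative, not a real API response:

```javascript
// Minimal stand-in for the body string returned by
// http://api.asksusi.com/susi/chat.json for a chat query.
const body = JSON.stringify({
  answers: [
    { actions: [{ type: 'answer', expression: 'Hello, I am SUSI.' }] }
  ]
});

// Same extraction the bot performs inside its request callback.
const ans = JSON.parse(body).answers[0].actions[0].expression;
console.log(ans); // -> Hello, I am SUSI.
```

Checking this path separately is useful because the expression lookup assumes at least one answer with at least one action; a malformed or empty response would throw before replyMessage is ever called.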

Continue Reading: How to make SUSI AI Line Bot

Designing Control UI of PSLab Android using Moqups

Mock-ups are an essential part of the app development cycle. With numerous mock-up tools available for Android apps (both offline and online), choosing the right mock-up tool becomes quite essential. The developers need a tool that supports the latest features like drag & drop elements, supports collaboration up to some extent and allows easy sharing of mock-ups. So Moqups was chosen as the mock-up tool for the PSLab Android team. Like other mock-up tools available in the market, using Moqups is quite simple, and its neat & simple user interface makes the job easier. This blog discusses some of the important aspects that need to be taken care of while designing mock-ups.

A typical online mock-up tool would look like this: a palette to drag & drop UI elements like buttons, text boxes, check boxes etc., an additional palette to modify the features of each element (here on the right) and other options at the top related to prototyping, previewing etc. The foremost challenge while designing any mock-up is to keep the design neat and simple so that even a layman doesn't face problems while using it. A simple UI is always appealing, and the current trend is creating flat & crisp UIs.

For example, the above mock-up design has numerous advantages for both the user and the programmer. There are seek bars as well as text boxes to input the values, along with the feature of displaying the value that actually gets implemented, and it's much simpler to use. From the developer's perspective, the presence of seven identical views allows code reuse: a single layout can be designed for one functionality and, since all of them are identical, the layout can be reused in a RecyclerView. The above design is a portion of the Control UI, which displays the functionalities for using PSLab as a function generator and as a voltage/current source. The other section of the UI is the Read portion.
This has the functionality to measure various parameters like voltage, resistance, capacitance and frequency, and to count pulses. Here, drop-down boxes have been provided where channel selection is required. Since voltages are the most commonly measured values in any experiment, the voltages of all the channels are displayed simultaneously. Attempts should always be made to keep the smaller views as identical as possible, since that makes them easier for the developer to implement and for the user to understand.

The Control UI has an Advanced section with features like waveform generators (which allow generating sine/square waves of a given frequency & phase), configuring Pulse Width Modulation (PWM) and selecting the digital output channel. Since the use of such features is limited to higher-level experiments, they have been placed separately in the Advanced section. Even here, drop-down boxes, text boxes & check boxes have been used to make the UI look interactive. The common dilemma faced while writing the XML file is regarding the view type to be chosen, as Android provides…

Continue Reading: Designing Control UI of PSLab Android using Moqups

Using ButterKnife in PSLab Android App

ButterKnife is an Android library used for view injection and binding. Unlike Dagger, ButterKnife is limited to views, whereas Dagger has a much broader utility and can be used for injection of almost anything, like views, fragments etc. Being limited to views makes it much simpler to use compared to Dagger. The need for using ButterKnife in our project PSLab Android was felt because binding views would be much simpler for layouts like that of the Oscilloscope menu, which has multiple views in the form of text boxes, check boxes, seek bars, popups etc. Also, ButterKnife makes it possible to access the views outside the file in which they were declared. In this blog, the use of ButterKnife is limited to activities and fragments. ButterKnife can be used anywhere we would have otherwise used findViewById(). It helps in preventing code repetition while instantiating views in the layout. The ButterKnife blog has neatly listed all the possible uses of the library. The added advantages of using ButterKnife are:

Boilerplate code is not needed, and the code volume is reduced significantly in some cases.
Setting up ButterKnife is quite easy, as all it takes is adding one dependency to your Gradle file.
It also has other features like resource binding (i.e. binding String, Color, Drawable etc.).
It simplifies code in other places too, for example with buttons: there is no need to use findViewById and setOnClickListener; simply annotating the button ID with @OnClick does the task.

Using ButterKnife was essential for PSLab Android since, for views like the ones shown below which have many elements, writing boilerplate code can be a very tedious task.

Using ButterKnife in activities

The PSLab app defines several activities like MainActivity, SplashActivity, ControlActivity etc., each of which consists of views that can be injected with ButterKnife.
After setting up ButterKnife by adding the dependencies in the Gradle files, import these modules in every activity:

import butterknife.BindView;
import butterknife.ButterKnife;

Traditionally, views in Android are defined as follows using the ID defined in a layout file, with findViewById used to retrieve the widgets:

private NavigationView navigationView;
private DrawerLayout drawer;
private Toolbar toolbar;

navigationView = (NavigationView) findViewById(org.fossasia.pslab.R.id.nav_view);
drawer = (DrawerLayout) findViewById(org.fossasia.pslab.R.id.drawer_layout);
toolbar = (Toolbar) findViewById(org.fossasia.pslab.R.id.toolbar);

With ButterKnife, however, fields are annotated with @BindView and a view ID, and the view is found and cast automatically from the layout files. After the annotations, ButterKnife.bind(this) is called to bind the views in the corresponding activity:

@BindView(R.id.nav_view) NavigationView navigationView;
@BindView(R.id.drawer_layout) DrawerLayout drawer;
@BindView(R.id.toolbar) Toolbar toolbar;

setContentView(R.layout.activity_main);
ButterKnife.bind(this);

Using ButterKnife in fragments

The PSLab Android app implements ApplicationFragment, DesignExperimentsFragment, HomeFragment etc., so ButterKnife is used for fragments as well. Using ButterKnife in fragments is slightly different from using it in activities, as the bound views need to be destroyed on leaving the fragment because fragments have a different life cycle (read about it here). For this, an Unbinder needs to be defined to unbind the views before they are destroyed. Quoting…

Continue Reading: Using ButterKnife in PSLab Android App

Prototyping PSLab Android App using Invision

Often, while designing apps, we need planning and proper design before actually building the app. This is where mock-up tools and prototyping are useful. While designing user interfaces, the first step is usually creating mock-ups. Mock-ups give quite a good idea about the appearance of the various layouts of the app. However, mock-ups are just still images and they don't give a clue about the user experience of the app. This is where prototyping tools are useful: prototyping helps to get an idea about the user experience without actually building the app. Invision is an online prototyping service which was used for the initial testing of the PSLab Android app. Some of the pictures below are screenshots of our prototype taken in Invision. Since it supports collaboration among developers, it proves to be a very useful tool.

Using Invision is quite simple. Visit the Invision website and sign up for an account. Before using Invision for prototyping, the mock-ups of the UI layouts must be ready, since Invision is meant only for prototyping, not for creating mock-ups. There are a lot of mock-up tools available online which are quite easy to use. Create a new project on Invision. Select the project type (Prototype in this case), followed by selecting the platform of the project, i.e. Android, iOS etc. Collaborators can be added to a project for working together. After project creation and adding collaborators are done, the mock-up screens can be uploaded to the project directory.

Select any mock-up screen and the window below appears. There are a few modes available in the bottom navbar: Preview mode, Build mode, Comment mode, Inspect mode and History mode.

Preview mode: view your screen and test the particular screen prototype.
Build mode: assign functionality to buttons, navbars, seek bars, check boxes etc. and other features like transitions.
Comment mode: leave comments/suggestions regarding performance/improvement for other collaborators to read.
Inspect mode: check for any unforeseen errors while building.
History mode: check the history of changes on the screen.

Switch to the build mode; it will prompt you to click & drag to create boxes around buttons, check boxes, seek bars etc. (shown above). Once a box (called a "hotspot" in Invision) is created, a dialog box pops up asking you to assign functionalities. The selected hotspot/box must link to another menu/layout or result in some action like closing the app; this functionality is provided by the "Link To:" option. Then the desired gesture for activating the hotspot is selected from the "Gesture:" option, which can be a tap for buttons & check boxes, a slide for navbars & seek bars etc. Lastly, the transition resulting from moving from the hotspot to the window assigned in "Link To:" is selected from the "Transition:" menu. This process can be repeated for all the screens in the project. Finally, for testing and previewing the final build, the screen which appears when the app starts is selected, and further navigation, gestures etc. are tested there. So, building prototypes is quite…

Continue Reading: Prototyping PSLab Android App using Invision

Adding Send Button in SUSI.AI webchat

Our SUSI.AI web chat app is improving day by day. One such day it looked like this: it replied to your query and had all the basic functionality, but something was missing. When viewed on mobile, we realised that it should have a send button. Send buttons make chat apps look complete and give them their finished look.

A method was defined in the MessageComposer component of the React app which takes the value of the textarea and passes it on:

_onClickButton() {
  let text = this.state.text.trim();
  if (text) {
    Actions.createMessage(text, this.props.threadID);
  }
  this.setState({text: ''});
}

This method is called in the onClick action of our send button, which is included in the div rendered by the MessageComposer component. The method is also called on pressing the ENTER key on the keyboard; its implementation can be seen here.

Why wrap the textarea and button in a div and not render them as two independent items? Well, in React you can only render a single root element, so wrapping them in a div is our only option.

Now that we had our functionality running, it was time for styling. Our team chose to use http://www.material-ui.com/ and its components for styling, and we chose FloatingActionButton as the send button. To use Material UI components in our component, several imports were needed.
But to enable these features we needed to change our render-to-DOM code to:

import MuiThemeProvider from 'material-ui/styles/MuiThemeProvider';

const App = () => (
  <MuiThemeProvider>
    <ChatApp />
  </MuiThemeProvider>
);

ReactDOM.render(
  <App />,
  document.getElementById('root')
);

The imports in our MessageComposer looked like this:

import Send from 'material-ui/svg-icons/content/send';
import FloatingActionButton from 'material-ui/FloatingActionButton';
import injectTapEventPlugin from 'react-tap-event-plugin';

injectTapEventPlugin();

injectTapEventPlugin is a very important call: in order to have event handlers in our send button, we need to call this method, and the handler for the tap/click event is known as onTouchTap. The JSX code to be rendered looked like this:

<div className="message-composer">
  <textarea
    name="message"
    value={this.state.text}
    onChange={this._onChange.bind(this)}
    onKeyDown={this._onKeyDown.bind(this)}
    ref={(textarea)=> { this.nameInput = textarea; }}
    placeholder="Type a message..."
  />
  <FloatingActionButton
    backgroundColor=' #607D8B'
    onTouchTap={this._onClickButton.bind(this)}
    style={style}>
    <Send />
  </FloatingActionButton>
</div>

The styling for the button was done separately and looked like this:

const style = {
  mini: true,
  top: '1px',
  right: '5px',
  position: 'absolute',
};

Ultimately, after successfully implementing all of this, our SUSI.AI web chat had a good-looking FloatingActionButton for sending messages. This can be tested here.
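The ENTER-key path mentioned above can be sketched as a small predicate, independent of React. This is an illustration of the idea, not the project's actual _onKeyDown: send on ENTER, but let Shift+ENTER fall through so a newline can be inserted.

```javascript
// Sketch: decide whether a keydown event should trigger sending.
// The event object only needs `keyCode` and `shiftKey` here.
function shouldSend(event) {
  const ENTER_KEY_CODE = 13;
  return event.keyCode === ENTER_KEY_CODE && !event.shiftKey;
}

// In the component, _onKeyDown would call _onClickButton when this
// returns true (and event.preventDefault() to suppress the newline).
console.log(shouldSend({ keyCode: 13, shiftKey: false })); // -> true
console.log(shouldSend({ keyCode: 13, shiftKey: true }));  // -> false
```

Keeping the decision in a pure function like this also makes the keyboard behaviour trivially unit-testable, separate from the DOM.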

Continue Reading: Adding Send Button in SUSI.AI webchat

Integrating an Image Editing Page in Phimpme Android

The main aim of Phimpme is to develop an image editing and sharing application as an alternative to proprietary solutions like Instagram. Any user can choose a photo from the gallery or click a picture with the camera and upload it to various social media platforms, including Drupal and WordPress. After reviewing most of the image editor applications currently in the app store, my team and I discussed and listed down the basic functionality of the image editing activity. We listed the following features for the image editing activity:

Filters
Stickers
Image tuning

Choosing the image editing application

There are a number of existing open source projects that we went through to check how they could be integrated into Phimpme. We looked into those projects which are licensed under the MIT licence. As per the MIT licence, the user has the liberty to use the code, modify it, merge and publish it without any restrictions. Image-Editor Android is one such application with an MIT licence. It has extensive features for manipulating and enhancing images:

Editing the image by drawing on it.
Applying stickers to the image.
Applying filters.
Cropping.
Rotating the image.
Adding text to the image.

It is an ideal application to be implemented in our project.

The basic flow of the application

First, get the image either from the gallery or the camera; the team has implemented the leafPic gallery and openCamera. Second, redirect the image from the leafPic gallery to the image editing activity by choosing the edit option from the popup menu.

Populating the popup menu in XML: the <menu> tag is the root node, which contains the items in the popup menu.
The following code is used to populate the menu:

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:id="@+id/action_edit"
          android:icon="@drawable/ic_edit"
          android:title="@string/Edit"
          android:showAsAction="ifRoom"/>
    <item android:id="@+id/action_use_as"
          android:icon="@drawable/ic_use_as"
          android:title="@string/useAs" />
</menu>

Setting up the image editing activity

The Image-Editor Android application contains two main sections:

MainActivity (to get the image).
imageeditlibrary (to edit the image).

We need to import the imageeditlibrary module. Android Studio gives an easy method to import a module from any other project using the GUI: File -> New -> Import Module, then choose the module from the desired application. Things to remember after importing a module from another project:

Make sure that the minSdkVersion and targetSdkVersion in the Gradle file of the imported module and the current working project are the same. In Phimpme the minSdkVersion is 16 and the targetSdkVersion is 25, which are used as the standard SDK versions.
Import all the classes used in the imageeditlibrary module before using them in the leafPic gallery.

Sending the image to the image editing activity

This includes three tasks:

Handling onClick listeners.
Sending the image from the leafPic activity.
Receiving the image in EditImageActivity.

Handling the onClick listener:

public boolean onOptionsItemSelected(MenuItem item) {
    switch (item.getItemId()) {
        case R.id.action_edit:
            // set OnClick listener here.
    }
}

Sending the image to EditImageActivity: first we need to get the path of the image to be sent. For this we need the FileUtils class to…

Continue Reading: Integrating an Image Editing Page in Phimpme Android

Map Support for SUSI Webchat

The SUSI chat client now supports map tiles for queries related to location. SUSI responds with an interactive internal map tile with the location pointed out by a marker. It also provides a link to OpenStreetMap, where you can get the whole view of the location using the zooming options provided, and it gives the population count for that location. Let's visit SUSI WebChat and try it out.

Query: Where is london
Response:
City of London is a place with a population of 7556900. Here is a map: https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228
Link to Openstreetmap: City of London (hyperlinked with https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228)
<Map Tile>

Implementation:

How do we know that a map tile is to be rendered? The actions in the API response tell the client what to render. The client loops through the actions array and renders the response for each action accordingly.

"actions": [
  {
    "type": "answer",
    "expression": "City of London is a place with a population of 7556900. Here is a map: https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228"
  },
  {
    "type": "anchor",
    "link": "https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228",
    "text": "Link to Openstreetmap: City of London"
  },
  {
    "type": "map",
    "latitude": "51.51279067225417",
    "longitude": "-0.09184009399817228",
    "zoom": "13"
  }
]

Note: The API response has been trimmed to show only the relevant content. The first action element is of type answer, so the client renders the text response, 'City of London is a place with a population of 7556900.
Here is a map: https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228'. The second action element is of type anchor, with the text to display and the link to hyperlink specified by the text and link attributes, so the client renders the text 'Link to Openstreetmap: City of London', hyperlinked to "https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228". Finally, the third action element is of type map. The latitude, longitude and zoom level are specified using the latitude, longitude and zoom attributes, and the client renders a map using them.

I used the 'react-leaflet' module to render the interactive map tiles. To integrate it into our project and set the required style for the map tiles, we need to load Leaflet's CSS style sheet, and we also need to set a height and width for the map component.

<link rel="stylesheet" href="http://cdn.leafletjs.com/leaflet/v0.7.7/leaflet.css" />

.leaflet-container {
  height: 150px;
  width: 80%;
  margin: 0 auto;
}

case 'map': {
  let lat = parseFloat(data.answers[0].actions[index].latitude);
  let lng = parseFloat(data.answers[0].actions[index].longitude);
  let zoom = parseFloat(data.answers[0].actions[index].zoom);
  let mymap = drawMap(lat, lng, zoom);
  listItems.push(
    <li className='message-list-item' key={action+index}>
      <section className={messageContainerClasses}>
        {mymap}
        <p className='message-time'>
          {message.date.toLocaleTimeString()}
        </p>
      </section>
    </li>
  );
  break;
}

import { divIcon } from 'leaflet';
import { Map, Marker, Popup, TileLayer } from 'react-leaflet';

// Draw a Map
function drawMap(lat, lng, zoom) {
  let position = [lat, lng];
  const icon = divIcon({
    className: 'map-marker-icon',
    iconSize: [35, 35]
  });
  const map = (
    <Map center={position} zoom={zoom}>
      <TileLayer
        attribution=''
        url='http://{s}.tile.osm.org/{z}/{x}/{y}.png'
      />
      <ExtendedMarker position={position} icon={icon}>
        <Popup>
          <span><strong>Hello!</strong> <br/> I am here.</span>
        </Popup>
      </ExtendedMarker>
    </Map>
  );
  return map;
}

Here, I used a custom marker icon because the default icon provided by Leaflet had an issue and was not being rendered. I used divIcon from Leaflet to create a custom map marker icon. When the…
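The loop-and-dispatch over the actions array described above (answer, anchor, map) can be sketched independently of React. This is a simplified illustration: the real client pushes JSX into listItems, while here each action is mapped to a plain descriptor object.

```javascript
// Dispatch each SUSI action to a renderer based on its `type` field,
// mirroring the switch in the chat client's message loop.
function renderActions(actions) {
  return actions.map((action) => {
    switch (action.type) {
      case 'answer':
        return { kind: 'text', text: action.expression };
      case 'anchor':
        return { kind: 'link', href: action.link, text: action.text };
      case 'map':
        // The API sends coordinates as strings, so parse them as the client does.
        return {
          kind: 'map',
          lat: parseFloat(action.latitude),
          lng: parseFloat(action.longitude),
          zoom: parseFloat(action.zoom)
        };
      default:
        return null; // unknown action types are skipped
    }
  });
}

const rendered = renderActions([
  { type: 'answer', expression: 'City of London is a place…' },
  { type: 'map', latitude: '51.5127', longitude: '-0.0918', zoom: '13' }
]);
console.log(rendered[1].zoom); // -> 13
```

Separating the dispatch from the rendering this way also makes it easy to add new action types later without touching the existing cases.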

Continue Reading: Map Support for SUSI Webchat