Adding Custom Scrollbar to SUSI AI Web Chat

A scrollbar indicates the depth of content on the current screen. It appears when the content overflows the screen and can no longer fit. We see scrollbars everywhere. By default, the scrollbar provided by the browser is not very attractive, but it does its job efficiently.

Since our SUSI.AI Web App is improving in both UI and functionality, we decided to add a custom scrollbar to it.

Earlier we had a standard scrollbar in our SUSI.AI web chat:

To add a custom scrollbar to our web chat, we decided to use the react-custom-scrollbars npm package.

Our reasons to choose this package were:

  • An auto-hide feature that hides the scrollbar after a specific period of time, which we can modify too.
  • No extra CSS styles are required to style the scrollbar.
  • It is well tested and trusted by many developers in open source.

To install this npm package:

npm install -S react-custom-scrollbars 

Now comes the usage part. First, we import it into our JavaScript file:

 import { Scrollbars } from 'react-custom-scrollbars';

After importing, wrap it around the data where you want to show a custom scrollbar. In our case this was messageListItems, so the code snippet looked like:

<Scrollbars>
 {messageListItems}
</Scrollbars>

This made our scrollbar look much better than the default one:

Now, to add the auto-hide functionality to our scrollbar, we need to add a few props to our <Scrollbars> tag.

    1. autoHide: enables the auto-hide feature for the scrollbar.
    2. autoHideTimeout: sets the hide delay of the scrollbar in milliseconds.
    3. autoHideDuration: sets the duration of the hide animation in milliseconds.

After adding the above-mentioned attributes our code changes to:

<Scrollbars
 autoHide
 autoHideTimeout={1000}
 autoHideDuration={200}>
 {messageListItems}
</Scrollbars>
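
If the scrollbar itself needs further restyling, the package also exposes render props for its parts. Here is a minimal sketch, assuming we want a custom thumb color (the rgba value below is our own choice, not from the SUSI code), using the renderThumbVertical prop:

<Scrollbars
 autoHide
 autoHideTimeout={1000}
 autoHideDuration={200}
 renderThumbVertical={({ style, ...props }) => (
   // merge the library's computed style with a custom thumb color
   <div {...props} style={{ ...style, backgroundColor: 'rgba(0, 0, 0, 0.3)' }} />
 )}>
 {messageListItems}
</Scrollbars>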

Resources:

Many more custom attributes can be found in the documentation by Malte Wessel here.

Testing Link:

We now have a much better scrollbar for our web chat, which can be tested here.

How SUSI AI Searches the Web For You

SUSI is now capable of performing web searches to answer your queries. When SUSI doesn't know how to answer a query, it performs a web search on the client side and displays all the results as horizontally swipeable tiles, with each tile giving a brief description and a link to the relevant source.

Let's visit SUSI WebChat and try it out.

Query: Search for Google
Response: <Web Search Results>

How does SUSI know when to perform a web search?

It uses action types to identify whether a web search is to be performed. The API response is parsed to check the action types, and if a websearch action type is present, an API call is made to the DuckDuckGo API with the relevant query and the results are displayed as tiles with:

  • Category: the topic related to the given query
  • Text: the result text from the web search
  • Image: an image related to the query, if present
  • Link: a URL redirecting to the relevant source

Parsing the actions:

Let us look at the API response for a query.

Sample Query: search for google

Response: <API Response>

"actions": [
  {
    "type": "answer",
    "expression": "Here is a web search result:"
  },
  {
    "type": "websearch",
    "query": "google"
  }
]

Note: The API response has been trimmed to show only the relevant content.

We find a websearch action type, with google as the query to be searched. So we now make an API call to the DuckDuckGo API to get our web search results.

API call format: api.duckduckgo.com/?q={query}&format=json

API call for the query google: http://api.duckduckgo.com/?q=google&format=json

From the DuckDuckGo API response we generate our web search tiles, showing relevant data using the fields present in each object.

This is a sample object from the DuckDuckGo API response under RelatedTopics, which we use to create our web search result tiles.

{
  "Result": "<a href=\"https:\/\/duckduckgo.com\/Google\">Google<\/a> An American multinational technology company specializing in Internet-related services and...",
  "Icon": {
    "URL": "https:\/\/duckduckgo.com\/i\/8f85c93f.png",
    "Height": "",
    "Width": ""
  },
  "FirstURL": "https:\/\/duckduckgo.com\/Google",
  "Text": "Google An American multinational technology company specializing in Internet-related services and..."
},
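
To make the mapping concrete, here is a rough sketch of how the fields of such an object line up with a result tile (toTile is a hypothetical helper, not the actual SUSI code, which also derives a title for each tile):

// Hypothetical helper: maps one RelatedTopics entry to tile fields
function toTile(topic) {
  return {
    description: topic.Text,               // text shown on the tile
    link: topic.FirstURL,                  // URL redirecting to the source
    icon: topic.Icon ? topic.Icon.URL : '' // image related to the query, if present
  };
}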

Let us look at the code for querying data from the API:

if (actions.indexOf('websearch') >= 0) {

  let actionIndex = actions.indexOf('websearch');
  let query = response.answers[0].actions[actionIndex].query;

  $.ajax({
    url: 'http://api.duckduckgo.com/?format=json&q=' + query,
    dataType: 'jsonp',
    crossDomain: true,
    timeout: 3000,
    async: false,

    success: function (data) {
      receivedMessage.websearchresults = data.RelatedTopics;

      // If DuckDuckGo returns an abstract, show it as the first tile
      if (data.AbstractText) {
        let abstractTile = {
          Text: '',
          FirstURL: '',
          Icon: { URL: '' },
        };
        abstractTile.Text = data.AbstractText;
        abstractTile.FirstURL = data.AbstractURL;
        abstractTile.Icon.URL = data.Image;
        receivedMessage.websearchresults.unshift(abstractTile);
      }

      let message = ChatMessageUtils.getSUSIMessageData(
        receivedMessage, currentThreadID);

      ChatAppDispatcher.dispatch({
        type: ActionTypes.CREATE_SUSI_MESSAGE,
        message
      });
    },

    error: function (errorThrown) {
      console.log(errorThrown);
      receivedMessage.text = 'Please check your internet connection';
    }

  });

}

Here, from the actions object, we get the query needed to search the web. We then make an AJAX call with that query to the DuckDuckGo API. If the API call succeeds, we collect the required data for the tiles as an array of objects and store it as websearchresults, then dispatch the message with the websearchresults. This gets reflected in the message store, and when the message is passed to the components, we use it to create the result tiles.

<MuiThemeProvider>
  <Paper zDepth={0} className='tile'>
    <a rel='noopener noreferrer'
      href={tile.link} target='_blank'
      className='tile-anchor'>
      {tile.icon &&
        (<Paper className='tile-img-container'>
          <img src={tile.icon}
            className='tile-img' alt=''/>
        </Paper>)
      }
      <Paper className='tile-text'>
        <p className='tile-title'>
          <strong>
            {processText(tile.title,'websearch-rss')}
          </strong>
        </p>
        {processText(tile.description,'websearch-rss')}
      </Paper>
    </a>
  </Paper>
</MuiThemeProvider>

We then display the tiles as a horizontally swipeable carousel, ensuring a good and interactive UX.

The react-slick module was used to implement the horizontal swiping feature.

function renderTiles(tiles) {

  // Slider is the default export of the react-slick package:
  // import Slider from 'react-slick';
  if (tiles.length === 0) {
    let noResultFound = 'NO Results Found';
    return (<center>{noResultFound}</center>);
  }

  let resultTiles = drawTiles(tiles);

  var settings = {
    speed: 500,
    slidesToShow: 3,
    slidesToScroll: 1,
    swipeToSlide: true,
    swipe: true,
    arrows: false
  };

  return (
    <Slider {...settings}>
      {resultTiles}
    </Slider>
  );

}

Here we handle the corner case where there are no results to display by rendering `NO Results Found`. We then have our web search results displayed as swipeable tiles, each with an image, title, description and a link to the source.

This is how SUSI performs web search to respond to user queries ensuring that no query goes unanswered! Don’t forget to swipe left and go through all the results displayed!


How SUSI AI Tabulates Answers For You

SUSI is an artificial intelligence chat bot that responds to all kinds of user queries. It isn't a regular chat bot that replies in just plain text: it supports various response types, which we refer to as 'actions'. One such action is the "table" type. When the response to a user query contains a list of answers which can be grouped, it is better visualised as a table rather than plain text.

Let's visit SUSI WebChat and try it out. In our example we ask SUSI for the 2009 race statistics of British Formula 1 racing driver Lewis Hamilton.

Query: race stats of hamilton in f1 season 2009

Response: <table> (API response)


How does SUSI do that? Let us look at the skill teaching SUSI to give table responses.

# Returns race stats as a table

race summary of * in f1 season *|race stats of * in f1 season *
!console:
{
  "url":"http://ergast.com/api/f1/$2$/drivers/$1$/status.json",
  "path":"$.MRData.StatusTable.Status",
  "actions":[{
     "type":"table",
     "columns":{"status":"Race Status","count":"Number Of Races"}
   }]
}
eol

Here, we tell SUSI that the response type is a table through the type attribute in actions, and we also define the column names and, via their respective keys, which column each value must be placed under. Using this information, SUSI generates a response with the table schema and data points.

How do we know when to render a table?

We know it through the type attribute in the actions from the API response.

"actions": [{
  "type": "table",
  "columns": {
    "status": "Race Status",
    "count": "Number Of Races"
  },
  "count": -1
}]

We can see that the type is table, so we now know that we have to render a table.

But what is the table schema? What do we fill it with?

There is a columns key under actions, and from the value of the columns key we get an object whose key-value pairs give us the column names and what data to put under each column.

Here, we have two columns: Race Status and Number Of Races.

The data to put under each column is found in answers[0].data, with the same keys as those for each column name, i.e. 'status' and 'count'.

Sample data object from the API response:

{
  "statusId": "2",
  "count": "1",
  "status": "Disqualified"
}

The value under the 'status' key is 'Disqualified', and the column name for the 'status' key is 'Race Status', so Disqualified is entered under the Race Status column in the table. Similarly, 1 is entered under the Number Of Races column. We thus have a row of our table. We populate the table with each object in the data array using the same procedure.
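
As a minimal standalone sketch of this key mapping (the variable names here are ours, not from the SUSI code):

// Sketch: deriving one table row from a data object
const columns = { status: 'Race Status', count: 'Number Of Races' };
const dataObject = { statusId: '2', count: '1', status: 'Disqualified' };

Object.keys(columns).forEach((key) => {
  console.log(columns[key] + ': ' + dataObject[key]);
});
// Race Status: Disqualified
// Number Of Races: 1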

let columns = data.answers[0].actions[index].columns;
let count = data.answers[0].actions[index].count;
let table = drawTable(columns, data.answers[0].data, count);

We also have a 'count' attribute in the API response. It denotes how many rows to populate in the table. If count = -1, it means infinite, i.e. display all the results.

function drawTable(columns, tableData, count) {

  let parseKeys;
  let showColName = true;

  // If the columns attribute is an array, it only lists the keys to
  // parse and no column header is shown
  if (columns.constructor === Array) {
    parseKeys = columns;
    showColName = false;
  }
  else {
    parseKeys = Object.keys(columns);
  }

  let tableheader = parseKeys.map((key, i) => {
    return (<TableHeaderColumn key={i}>{columns[key]}</TableHeaderColumn>);
  });

  // count = -1 means display all rows
  let rowCount = tableData.length;

  if (count > -1) {
    rowCount = Math.min(count, tableData.length);
  }

  let rows = [];

  for (var j = 0; j < rowCount; j++) {

    let eachrow = tableData[j];

    let rowcols = parseKeys.map((key, i) => {
      return (
        <TableRowColumn key={i}>
          <Linkify properties={{target: '_blank'}}>
            {eachrow[key]}
          </Linkify>
        </TableRowColumn>
      );
    });

    rows.push(
      <TableRow key={j}>{rowcols}</TableRow>
    );

  }

  const table =
    <MuiThemeProvider>
      <Table selectable={false}>
        <TableHeader displaySelectAll={false} adjustForCheckbox={false}>
          {showColName && <TableRow>{tableheader}</TableRow>}
        </TableHeader>
        <TableBody displayRowCheckbox={false}>{rows}</TableBody>
      </Table>
    </MuiThemeProvider>;

  return table;

}

Here we first determine how many rows to populate using the count attribute and then parse the columns to get the column names and keys. We then loop through the data and populate each row.

This is how SUSI responds with tabulated data!

You can create your own table skill and SUSI will give the tabulated response you need. Check out this tutorial to know more about SUSI and the various other action types it supports.


How to Make SUSI AI Slack Bot

To make the SUSI Slack bot we will use the Real Time Messaging API of Slack, which allows users to receive messages from the bot in real time. To make the SUSI Slack bot, follow these steps:

Steps:

  1. First, create a Slack team in which your bot will run. To create a team, go to https://slack.com/ and create a new team.
  2. After creating it, sign in to your team and go to the Apps and Integrations option from the menu in the left corner.
  3. Click Manage in the top right corner and go to Custom Integrations to add a configuration to Bots.
  4. After adding the configuration data and bot username, and copying the API token, we have to write the code that sets up the bot in Slack. To set up the code, follow the steps below:
  5. Install Node.js from https://nodejs.org/en/ if you haven't installed it already.
  6. Create a folder with any name, open a shell, and change your current directory to the new folder you created.
  7. Type npm init on the command line and enter details like name, version and entry point.
  8. Create a file with the same name that you entered as the entry point in the step above, i.e. index.js, and place it in the same folder you created.
  9. Type the following commands on the command line: npm install --save @slack/client. After @slack/client is installed, type npm install --save express; after express is installed, type npm install --save request, and then npm install --save http. When all the modules are installed, check your package.json; the modules will be included within the dependencies section.
  10. Your package.json file should look like this (versions may differ, and @slack/client from step 9 should also appear under dependencies):
    {
      "name": "slack-bot",
      "version": "1.0.0",
      "description": "SUSI Slack Bot",
      "main": "index.js",
      "dependencies": {
        "@slack/client": "^3.10.0",
        "express": "^4.15.3",
        "http": "0.0.0",
        "request": "^2.81.0"
      },
      "scripts": {
        "test": "echo \"Error: no test specified\" && exit 1",
        "start": "node index.js"
      }
    }
    
  11. Copy the following code into the file you created, i.e. index.js:
    var Slack = require('@slack/client');
    var request = require('request');
    var express = require('express');
    var http = require('http');
    var app = express();
    var RtmClient = Slack.RtmClient;
    var RTM_EVENTS = Slack.RTM_EVENTS;
    var token = process.env.APIToken;

    var rtm = new RtmClient(token, { logLevel: 'info' });
    rtm.start();

    // Ping the Heroku app every 20 minutes to keep it active
    setInterval(function() {
        http.get(process.env.HerokuUrl);
    }, 1200000);

    rtm.on(RTM_EVENTS.MESSAGE, function(message) {
        var channel = message.channel;

        var options = {
            method: 'GET',
            url: 'http://api.asksusi.com/susi/chat.json',
            qs: {
                timezoneOffset: '-330',
                q: message.text
            }
        };

        // Send the user's message to the SUSI API and relay the answer to Slack
        request(options, function(error, response, body) {
            if (error) throw new Error(error);
            var ans = (JSON.parse(body)).answers[0].actions[0].expression;
            rtm.sendMessage(ans, channel);
        });
    });

    const port = process.env.PORT || 3000;
    app.listen(port, () => {
        console.log(`listening on ${port}`);
    });


  12. Now we have to deploy this code to Heroku.
  13. Before deploying, we have to make a GitHub repository for the chatbot. To make a GitHub repository, follow these steps:

    On the command line, change the current directory to the folder we created above and write:

    git init
    git add .
    git commit -m "initial"
    git remote add origin <URL for remote repository>
    git remote -v
    git push -u origin master

    You will get the URL for the remote repository by creating a repository on GitHub and copying its link.

  14. To deploy your bot to Heroku, you need an account on Heroku; after making an account, create an app.
  15. Deploy the app using the GitHub deployment method.
  16. Select the automatic deployment method.
  17. Add the APIToken and HerokuUrl variables to the Heroku app in the Settings options.
  18. Your SUSI Slack bot is ready. Enjoy chatting with it! If you want to learn more about the Slack API, refer to https://api.slack.com.

OpenLayers 3 Map that Animates Emojis Using LokLak API

OpenLayers 3 maps are fully functional maps which offer additional interactive features. In the Emoji Heatmapper app in Loklak Apps, I am using interactive OpenLayers 3 maps to visualize the data. In this blog post, I am going to show you how to build an OpenLayers 3 map that animates emojis according to the query entered and the locations tracked through the LokLak search API.

We start with a simple map using just one background layer in a clean style.

var map = new ol.Map({
    target: 'map',  // The DOM element that will contain the map
    renderer: 'canvas', // Force the renderer to be used
    layers: [
        // Add a new Tile layer getting tiles from the OpenStreetMap source
        new ol.layer.Tile({
            source: new ol.source.OSM()
        }),
        vectorLayer
    ],
    // Create a view centered on the specified location and zoom level
    view: new ol.View({
        center: ol.proj.transform([2.1833, 41.3833], 'EPSG:4326', 'EPSG:3857'),
        zoom: 2
    })
});

Sample output which displays the map:

The data set for the locations of tweets containing emojis is tracked using the search API of LokLak, which returns a simplified extract as JSON. The response contains a list of coordinates named location_point; each coordinate consists of lat and long values. With the coordinates, we create a circle point, i.e. a marker, on the map showing where the emoji has recently been used in the posted tweets.

In the callback of the AJAX request we loop through the list of coordinates. The coordinates in the response are in EPSG:4326. Usually, when loading vector data with a different projection, OpenLayers automatically re-projects the geometries to the projection of the map. Because we are loading the data ourselves, we have to manually transform the points to EPSG:3857. Then we can add the features to the vector source.

for (var i = 0; i < tweets.statuses.length; i++) {
    if (tweets.statuses[i].location_point !== undefined) {
        // Creation of the point with the tweet's coordinates
        // A coordinate system transform is required: OpenLayers uses
        // EPSG:3857 by default, while LokLak's output is EPSG:4326
        var point = new ol.geom.Point(ol.proj.transform(tweets.statuses[i].location_point, 'EPSG:4326', 'EPSG:3857'));
        // Add the point to the data vector
        vectorSource.addFeature(new ol.Feature({
            geometry: point
        }));
    }
}

Markers on the map:

We can also style the markers that get rendered onto the map, using ol.style.Style provided by OpenLayers.

var style = new ol.style.Style({
    stroke: new ol.style.Stroke({
        color: [64, 200, 200, 0.5],
        width: 5
    }),
    text: new ol.style.Text({
        font: '30px sans-serif',
        text: document.getElementById('searchField').value !== '' ? document.getElementById('searchField').value : '', //any text can be given here
        fill: new ol.style.Fill({
            color: [64, 64, 64, 0.75]
        })
    })
});
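
For completeness, the style and the source can be tied together in the vectorLayer referenced in the map setup above. A minimal sketch, assuming the vectorSource that the AJAX callback fills:

// Vector source holding the tweet markers (filled in the AJAX callback)
var vectorSource = new ol.source.Vector();

// Vector layer combining the markers with the style defined above
var vectorLayer = new ol.layer.Vector({
    source: vectorSource,
    style: style
});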

Styled markers on the map:

So these were a few tips and tricks for using interactive OpenLayers 3 maps.

The full code of the example is available here.


Using Activities with Bottom navigation view in Phimpme Android

Can we use a Bottom Navigation View with activities? In the FOSSASIA Phimpme project we integrated two big projects, Open Camera and Leafpic. Both are activity-based projects, not developed over fragments, whereas the bottom navigation we use normally works with fragments.

In Leafpic, as well as in Open Camera, there is a lot of multi-level inheritance running the app components, so the problem we faced was how to continue working with activities. For theme management, using a base activity is necessary, and the base activity extends AppCompatActivity. The Leafpic code hierarchy is very complex and depends on various activities. The better way is to keep the activities, use the Bottom Navigation View with them, and make the activities work much like fragments.

Image source: https://storage.googleapis.com/material-design/

Solution:

This is possible if we include the bottom navigation view in each of the activities and set its selected state according to the menu item ID.

Steps to Implement (How I Did It in Phimpme)

  • Add a Base Activity which takes care of two things:

    • Taking the layout resource and setting it in the activity.
    • Setting the correct navigation menu item as selected.

The function selectBottomNavigationBarItem(int itemId) takes care of which item is currently active. We call updateNavigationBarState() in onStart(): getNavigationMenuItemId() returns the menu id, and selectBottomNavigationBarItem() updates the bottom navigation view.

public abstract class BaseActivity extends AppCompatActivity implements BottomNavigationView.OnNavigationItemSelectedListener

private void updateNavigationBarState() {
   int actionId = getNavigationMenuItemId();
   selectBottomNavigationBarItem(actionId);
}

void selectBottomNavigationBarItem(int itemId) {
   Menu menu = navigationView.getMenu();
   for (int i = 0, size = menu.size(); i < size; i++) {
       MenuItem item = menu.getItem(i);
       boolean shouldBeChecked = item.getItemId() == itemId;
       if (shouldBeChecked) {
           item.setChecked(true);
           break;
       }
   }
}
  • Add two abstract functions in the BaseActivity for the above task:
public abstract int getContentViewId();

public abstract int getNavigationMenuItemId();
  • Extend every activity from the BaseActivity.

For example, I created a blank activity, AccountsActivity, and extended it from BaseActivity:

public class AccountsActivity extends BaseActivity
  • Implement the abstract functions in every activity, returning the correct layout and menu item id.

@Override
public int getContentViewId() {
    return R.layout.content_accounts;
}

@Override
public int getNavigationMenuItemId() {
    return R.id.navigation_accounts;
}
  • Remove setContentView(R.layout.activity_main); from each activity, since the BaseActivity now sets the layout.
  • Add onNavigationItemSelected in the BaseActivity:

@Override
public boolean onNavigationItemSelected(@NonNull final MenuItem item) {
    switch (item.getItemId()) {
        case R.id.navigation_home:
            startActivity(new Intent(this, LFMainActivity.class));
            break;
        case R.id.navigation_camera:
            startActivity(new Intent(this, CameraActivity.class));
            break;
        case R.id.navigation_accounts:
            startActivity(new Intent(this, AccountsActivity.class));
            break;
    }
    finish();
    return true;
}

Output:

The transition is as smooth as using bottom navigation views with fragments. The bottom navigation also appears on top of the activity.

Source: https://stackoverflow.com/a/42811144/2902518


Phimpme: Merging several Android Projects into One Project

To speed up our development of version 2 of the Phimpme Android app, we decided to use some existing open source libraries and projects, such as the Open Camera app for camera features and Leafpic for the photo gallery.

Integrating several big projects into one is a crucial step, and it is not difficult. Here are some points to ponder.

  1. Clone each project separately. Build the project and note down the features you want to integrate. Explore the manifest file to learn the launcher activity, the services, and the permissions the app requires.

I explored the Leafpic manifest file, found its launcher class, and changed it to work with our code. I looked at the permissions the app required, such as camera, access fine location and write external storage.

  2. Follow a bottom-up approach while integrating; it makes life easier. By a bottom-up approach I mean: create a new branch and then start deleting the code which is not required. Always work in a branch so that no code is lost or messed up.

Not everything is required at the moment, so I first integrated the whole Leafpic app and then removed the splash screen. I also noted down the features that needed to be removed, such as the drawer, search bar etc.

  3. Remove and commit. In a big project things are interlinked, so removing one feature may actually require removing a lot of code. My strategy is to focus on one feature and, once it is removed, commit the code. This lets you hard-reset your code at any point.

Many times I used git reset --hard on my code because it got messed up.

  4. Licensing: one of the most important things when using open source code is the license. Always follow what is written in the license; this is the way we give credit for the authors' hard work. There are various types of licenses; three famous ones are the Apache License, the MIT License and the GNU General Public License. All have their pros and cons.

Apart from the license there can be other conditions as well. For example Leafpic, which we are using in Phimpme, has a condition to inform its developers about the project in which it is being used.

  5. Be aware of file duplication: sometimes many files have the same name, which results in file duplication. Refactor them early in the code.

I refactored the MainActivity in Leafpic to LFMainActivity so as not to clash with the existing one. Resolving package names is also a tedious task; Android Studio already does a lot of the refactoring, but some parts are left over.

  • Make sure your manifest file contains the correct names.
  • The best way to refactor your package name in XML files is Ctrl+Shift+F in Ubuntu, or Edit → Find → Find in Path in Android Studio. Then search for the old package name and replace it with the new one. You can use Alt+J for multiple cursors as well.
  • Clean and rebuild the project.

Run the app at each step so that you can catch errors early. Also use the debugger for debugging.


Using Root Directory as the Documentation Directory with Yaydoc

In our test builds for Yaydoc, we found that if we set the root as the documentation directory, the build would fail with a very long build log. In the build process, we create some temporary directories, such as a virtual environment and the build directory, in the root. After some inspection of the build logs, we found that when the root is itself used as the documentation directory, we were accidentally copying the build directory recursively into itself, which led to build failure. Along with this, since the virtual environment directory was also accidentally being copied into the build directory, we were actually building the documentation of the entire Python standard library on each build.

Once the problem and its cause were known, the course of action was clear. We needed to ensure that none of the temporary directories we create as part of the build process were copied into the build directory. The following changes were made to achieve that.

  • The virtual environment directory is now created in the HOME directory instead of the root.
  • Any other temporary directories, except the main build directory, are now deleted before copying.
  • To prevent the recursive copying, we used the --exclude parameter of rsync:
rsync --exclude=BUILD_DIR DOCS_DIR/ BUILD_DIR/

After this patch, the root can also be used as the documentation directory with Yaydoc. To do so, just set the environment variable DOCPATH to ".".

Adding Global Search and Extending Bookmark Views in Open Event Android

When we design an application, it is essential that the design and feature set enable the user to find all the relevant information he or she is looking for. In the first versions of the Open Event Android app it was difficult to find the sessions and speakers related to a certain track; it was only possible to search for them individually. The user also could not view bookmarks on the main page but had to go to a separate tab to view them. These were some capabilities I wanted to add to the app.

In this post I outline the concepts and advantages of a global search and a home screen in the app. I took inspiration from the Google I/O 2017 app, which had these features already, and I demonstrate how I added a home screen that also enables users to view their bookmarks on the home screen itself.

Global Search v/s Local Search

Local Search
Global Search


If we observe the above images closely, we can see a stark difference in the capabilities of each search. In the Local Search we are only able to search within the Tracks section and nothing else. This is fixed in the Global Search page, which exists alongside the new home screen. As all the results a user might need are obtained from a single search, it improves the overall user experience of the app. Another noticeable gap in the previous iteration of the application was that a user had to go to a separate tab to view his/her bookmarks. It is better for the app to have a home page detailing all the event's/conference's details as well as displaying the user's bookmarks.

New Home

Home screen
Home screen with Bookmarks

Home screen with Bookmarks
Home screen Demo

The above images/gifs show the functioning and the UI/UX of the new home screen within the app. Currently I am working to further improve the way the bookmarks are displayed. The new home screen provides the user with the event details, i.e. FOSSASIA 2017 in this case. This would be different for each conference/event, and the data is fetched from the open-event-orga server (the first part of the project) if it doesn't already exist in the JSON files provided in the assets folder of the application. All the event information is populated from the JSON files provided in the assets folder of the app directory structure:

  • config.json
  • sponsors.json
  • microlocations.json
  • event.json (this stores the information that we see on the home screen)
  • sessions.json
  • speakers.json
  • track.json

All the file names are descriptive enough to denote what each of them stores. I hope I have put forward why the addition of a new home screen with bookmarks, along with the Global Search feature, was a neat addition to the app.

Link to PR for this feature : https://github.com/fossasia/open-event-android/pull/1565


Designing Control UI of PSLab Android using Moqups

Mockups are an essential part of the app development cycle. With numerous mock-up tools available for Android apps (both offline and online), choosing the right one becomes quite important. Developers need a tool that supports features like drag & drop elements, supports collaboration to some extent, and allows easy sharing of mockups. So, Moqups was chosen as the mock-up tool for the PSLab Android team.

Like other mock-up tools available in the market, using Moqups is quite simple, and its neat & simple user interface makes the job easier. This blog discusses some of the important aspects that need to be taken care of while designing mockups.

A typical online mock-up tool has a palette to drag & drop UI elements like buttons, text boxes, check boxes etc., a palette to modify the properties of each element (here on the right), and other options at the top related to prototyping, previewing etc.

    • The foremost challenge while designing any mock-up is to keep the design neat and simple, such that even a layman doesn't face problems while using it. A simple UI is always appealing, and the current trend is flat & crisp UIs.

    • For example, the above mock-up design has numerous advantages for both the user and the programmer. There are seek bars as well as text boxes to input values, along with the display of the value that actually gets applied, and it's much simpler to use. From the developer's perspective, the presence of seven identical views allows code reuse: a simple layout can be designed for one functionality and, since all of them are identical, the layout can be reused in a RecyclerView.
    • The above design is a portion of the Control UI, which displays the functionalities for using PSLab as a function generator and as a voltage/current source.

    • The other section of the UI is the Read portion. This has the functionalities to measure various parameters like voltage, resistance, capacitance and frequency, and to count pulses. Here, drop-down boxes have been provided at places where channel selection is required. Since voltages are the most commonly measured values in any experiment, the voltages of all the channels are displayed simultaneously.
    • Attempts should always be made to keep the smaller views as identical as possible, since that makes them easier for the developer to implement and for the user to understand.


The Control UI has an Advanced section with features like waveform generators (to generate sine/square waves of a given frequency & phase), configuring Pulse Width Modulation (PWM), and selecting the digital output channel. Since the use of such features is limited to higher-level experiments, they have been placed separately in the Advanced section.

Even here, drop-down boxes, text boxes & check boxes have been used to make the UI look interactive.

A common dilemma while writing the XML file is which view type to choose, as Android provides a lot of them, like LinearLayout, ConstraintLayout, ScrollView, RecyclerView, ListView etc. There are several possible ways of designing a view, but some guidelines help: using a ListView or RecyclerView is easier where elements repeat, while for elements that are quite distinct from each other it is better to stick to LinearLayout and ConstraintLayout.