Implementation of Text-To-Speech Feature In Susper

Susper has a voice search feature that gives users a better search experience. We enhanced the speech recognition by adding a Speech Synthesis, or Text-To-Speech, feature. The speech synthesis feature should only work when a voice search is attempted.

The idea was to create speech synthesis similar to the market leader's. Here is the link to a YouTube video showing a demo of the feature: Video link

The video demonstrates that:

  • If a manual search is used, the feature should not work.
  • If voice search is used, the feature should work.

For implementing this feature, we used the Speech Synthesis API, which is available in Google Chrome version 33 and above.

window.speechSynthesis.speak(new SpeechSynthesisUtterance('Hello world!')); can be used to check whether the browser supports this feature or not (note that speak() expects a SpeechSynthesisUtterance object, not a plain string).
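A more defensive check is to test for the API's presence before speaking. A minimal sketch in plain browser JavaScript (not taken from the Susper codebase):

if ('speechSynthesis' in window) {
  // the API is available, so speak a short test utterance
  window.speechSynthesis.speak(new SpeechSynthesisUtterance('Hello world!'));
} else {
  console.warn('Speech Synthesis API is not supported in this browser.');
}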

First, we created an interface:

interface IWindow extends Window {
  SpeechSynthesisUtterance: any;
  speechSynthesis: any;
}


Then, under the @Injectable() decorator, we created a class for the SpeechSynthesisService.

export class SpeechSynthesisService {
  utterence: any;

  constructor(private zone: NgZone) { }

  speak(text: string): void {
    const { SpeechSynthesisUtterance }: IWindow = <IWindow>window;
    this.utterence = new SpeechSynthesisUtterance();
    this.utterence.text = text; // text to be uttered
    this.utterence.lang = 'en-US'; // default language
    this.utterence.volume = 1; // can be set between 0 and 1
    this.utterence.rate = 1; // can be set between 0.1 and 10
    this.utterence.pitch = 1; // can be set between 0 and 2
    (window as any).speechSynthesis.speak(this.utterence);
  }

  // to pause the queue of utterences
  pause(): void {
    (window as any).speechSynthesis.pause();
  }
}


The above code implements the Text-To-Speech feature.

The source code for the implementation can be found here: https://github.com/fossasia/susper.com/blob/master/src/app/speech-synthesis.service.ts

We call speech synthesis only when voice search mode is activated. Here we used Redux to check whether the mode is 'speech' or not. When the mode is 'speech', the description inside the infobox should be uttered.

We made the following changes in infobox.component.ts:

import { SpeechSynthesisService } from '../speech-synthesis.service';

speechMode: any;

constructor(private synthesis: SpeechSynthesisService) { }

this.query$ = store.select(fromRoot.getwholequery);
this.query$.subscribe(query => {
  this.keyword = query.query;
  this.speechMode = query.mode;
});


And we added a conditional statement to check whether the mode is 'speech' or not.

// conditional statement
// to check if mode is 'speech' or not
if (this.speechMode === 'speech') {
  this.startSpeaking(this.results[0].description);
}

startSpeaking(description) {
  this.synthesis.speak(description);
  this.synthesis.pause();
}



Checking Whether Migrations Are Up To Date With The Sqlalchemy Models In The Open Event Server

In the Open Event Server, when a pull request changes a sqlalchemy model, the proper migrations for the change are sometimes missed in the PR.

The first approach to check whether the migrations were up to date in the database was with the following health check function:

from subprocess import check_output

def health_check_migrations():
    """
    Checks whether the database is up to date with the migrations,
    assuming there is a single migration head
    :return:
    """
    head = check_output(["python", "manage.py", "db", "heads"]).split(" ")[0]

    if head == version_num:
        return True, 'database up to date with migrations'
    return False, 'database out of date with migrations'


In the above function, we get the head according to the migration files as follows:

head = check_output(["python", "manage.py", "db", "heads"]).split(" ")[0]


The table alembic_version contains the latest alembic revision to which the database was actually upgraded. We can get this revision from the following line:

version_num = (db.session.execute('SELECT version_num from alembic_version').fetchone())['version_num']


Then we compare the two heads and return a tuple based on the comparison result. While this method was pretty fast, it had a drawback: if the user forgets to generate the migration files for the changes done in the sqlalchemy model, this approach fails to raise a failure status in the health check.

To overcome this drawback, all the sqlalchemy models were fetched automatically and simple sqlalchemy select queries were made to check whether the migrations were up to date.

Remember that a raw SQL query will not serve our purpose in this case, since you would have to specify the columns explicitly in the query. A sqlalchemy query, on the other hand, is generated from the fields defined in the db model, so if a migration incorporating a model change is missing, a proper error will be raised.
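To illustrate, consider a hypothetical User model whose new age column exists in the model but has not yet been migrated into the table (the model and column names are illustrative, not project code):

# hypothetical User model: "age" exists in the model but not in the table yet

# raw SQL with explicit columns: still passes, the missing column goes unnoticed
db.session.execute('SELECT id, name FROM users').fetchall()

# sqlalchemy query: generated as "SELECT id, name, age FROM users" from the
# model definition, so the missing "age" column raises an error
db.session.query(User).first()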

We can accomplish this with the following function:

def health_check_migrations():
    """
    Checks whether the database is up to date with the migrations by
    performing a select query on each model
    :return:
    """
    # Get all the models in the db; all models should have an explicit __tablename__
    classes, models, table_names = [], [], []
    # noinspection PyProtectedMember
    for class_ in db.Model._decl_class_registry.values():
        try:
            table_names.append(class_.__tablename__)
            classes.append(class_)
        except Exception:
            pass

    for table in db.metadata.tables.items():
        if table[0] in table_names:
            models.append(classes[table_names.index(table[0])])

    for model in models:
        try:
            db.session.query(model).first()
        except Exception:
            return False, '{} model out of date with migrations'.format(model)
    return True, 'database up to date with migrations'


In the above code, we automatically get all the models and tables present in the database. Then for each model we try a simple SELECT query which returns the first row found. If there is any error in doing so, False, '{} model out of date with migrations'.format(model) is returned, ensuring a failure status in the health checks.



Displaying Multiple Markers on Map

Connfa and the Open Event Android app provide the facility to see locations on a Google map by displaying markers at the places where the sessions will take place. As there are many sessions in a conference, we need to display a marker for each location. In this blog, we learn how to display multiple markers on a map using the Google Maps API, with reference to the Connfa app.
First, you need to set up the Google Maps API in your app. Find the detailed tutorial for that here, which you can follow to get an API key; you'll need a Gmail account for that. Then you can define a container like this in your XML layout where the map will appear.

<FrameLayout
   android:id="@+id/fragmentHolder"
   android:layout_width="match_parent"
   android:layout_height="match_parent"
   android:layout_below="@+id/layoutInfo"
   android:background="@android:color/white" />

Now you can add the map fragment into this FrameLayout named "fragmentHolder" with a fragment transaction:

private void replaceMapFragment() {
   CustomMapFragment mapFragment = CustomMapFragment.newInstance(LocationFragment.this);
   LocationFragment
           .this.getChildFragmentManager()
           .beginTransaction()
           .replace(R.id.fragmentHolder, mapFragment)
           .commitAllowingStateLoss();
}

We receive the data about locations in a List containing the longitude, latitude, name and room number of each location. Find a sample result in the Open Event format. Here is how we fill in the locations:

for (int i = 0; i < locations.size(); i++) {
   Location location = locations.get(i);
   LatLng position = new LatLng(location.getLat(), location.getLon());
   mGoogleMap.addMarker(new MarkerOptions().position(position));

   if (i == 0) {
       CameraPosition camPos = new CameraPosition(position, ZOOM_LEVEL, TILT_LEVEL, BEARING_LEVEL);
       mGoogleMap.moveCamera(CameraUpdateFactory.newCameraPosition(camPos));
       fillTextViews(location);
   }
}

The CameraPosition defines properties like the zoom level, which can be adjusted by changing the parameters. You can define them like this:

private static final int ZOOM_LEVEL = 15;
private static final int TILT_LEVEL = 0;
private static final int BEARING_LEVEL = 0;
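One detail not shown above is how mGoogleMap is obtained. With the standard Google Maps Android API, the fragment delivers the map asynchronously through getMapAsync(); a sketch of what that typically looks like (the Connfa CustomMapFragment may wire this up differently):

mapFragment.getMapAsync(new OnMapReadyCallback() {
    @Override
    public void onMapReady(GoogleMap googleMap) {
        // store the map reference; only now is it safe to add
        // markers and move the camera
        mGoogleMap = googleMap;
    }
});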

Find the complete implementation here.

References:

  • Google Map APIs Documentation – https://developers.google.com/maps/documentation/android-api/map-with-marker

Adding Description to the Susi AI Skills

Susi Skill CMS is an editor to write and edit skills easily. It follows an API-centric approach where the Susi server acts as the API server and a web front-end acts as the client for the API and provides the user interface. A skill is a set of intents. One text file represents one skill; it may contain several intents which all belong together. All the skills are stored in the Susi Skill Data repository, organised by a fixed directory schema.

Using this schema, one can access any skill based on the four-tuple of parameters: model, group, language, skill. To know what a skill is about, we needed to add a !description operator which identifies the text as a description for the skill. Let's check out how to achieve it. The Susi Skill class provides parser methods for the set of intents, given as text files.

public static JSONObject readEzDSkill(BufferedReader br) throws JSONException {
    // ...
    if (line.startsWith("!") && (thenpos = line.indexOf(':')) > 0) {
        String head = line.substring(1, thenpos).trim().toLowerCase();
        String tail = line.substring(thenpos + 1).trim();
        if (head.equals("description")) {
            description = tail;
        }
    }
    if (description.length() > 0) intent.put("description", description);
    // ...
}

The readEzDSkill method parses the skill txt file: it checks whether a line starts with '!description' (the bang operator with the description head) and then stores the content in the string variable description.
If a description is found in a skill, it is recorded and put into the JSON array of intents.
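For instance, a skill file could carry a line like the following (a hypothetical example; per the parser above, any line starting with the bang operator and the "description" head is picked up):

!description: A skill that answers questions about the weather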

private final Map<String, Set<String>> skillDescriptions;

if (intent.getDescription() != null) {
    Set<String> descriptions = this.skillDescriptions.get(intent.getSkill());
    if (descriptions == null) {
        descriptions = new LinkedHashSet<>();
        this.skillDescriptions.put(intent.getSkill(), descriptions);
    }
    descriptions.add(intent.getDescription());
}

The SusiMind class processes this JSON and stores the descriptions in a map from skill path to description. This map is used by the DescriptionSkillService to list the descriptions for all the skills given a model, group and language. For adding the description servlet, we need to inherit the service class from AbstractAPIHandler and implement the APIHandler interface. In Susi Server, an abstract class AbstractAPIHandler extending HttpServlet and implementing the APIHandler interface is provided.

@Override
public BaseUserRole getMinimalBaseUserRole() {
    return BaseUserRole.ANONYMOUS;
}

@Override
public JSONObject getDefaultPermissions(BaseUserRole baseUserRole) {
    return null;
}

@Override
public String getAPIPath() {
    return "/cms/getDescriptionSkill.json";
}

The getAPIPath() method sets the API endpoint path; it gets appended to the base path, which is 127.0.0.1:4000/cms/getDescriptionSkill.json on localhost. The getMinimalBaseUserRole method tells the minimum user role required to access this servlet; it can also be ADMIN or USER. In our case it is ANONYMOUS: a user need not log in to access this endpoint.
Next, we implement the serviceImpl method, which gives us the desired response in JSON format.

@Override
public ServiceResponse serviceImpl(Query call, HttpServletResponse response, Authorization rights, final JsonObjectWithDefault permissions) {
    String model = call.get("model", "");
    String group = call.get("group", "");
    String language = call.get("language", "");
    JSONObject descriptions = new JSONObject(true);
    for (Map.Entry<String, Set<String>> entry : DAO.susi.getSkillDescriptions().entrySet()) {
        String path = entry.getKey();
        if ((model.length() == 0 || path.indexOf("/" + model + "/") > 0)
                && (group.length() == 0 || path.indexOf("/" + group + "/") > 0)
                && (language.length() == 0 || path.indexOf("/" + language + "/") > 0)) {
            descriptions.put(path, entry.getValue());
        }
    }
    JSONObject json = new JSONObject(true)
            .put("model", model)
            .put("group", group)
            .put("language", language)
            .put("descriptions", descriptions);
    return new ServiceResponse(json);
}

We can get the required parameters through the call.get() method, where the first parameter is the key for which we want the value and the second parameter is the default value. If a path contains the desired language, group and model, it is returned in the response; otherwise an error message is displayed. To check the response, go to http://api.susi.ai/cms/getDescriptionSkill.json?model=general&group=knowledge&language=en or http://127.0.0.1:4000/cms/getDescriptionSkill.json.
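A response might look roughly like this (the keys follow the code above; the skill path and description values are illustrative):

{
  "model": "general",
  "group": "knowledge",
  "language": "en",
  "descriptions": {
    "/susi_skill_data/models/general/knowledge/en/news.txt": ["A skill that fetches the latest news"]
  }
}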

This is how the getDescriptionSkill service works. To add a description to a skill, visit susi_skill_data, the storage place for Susi skills. For more information and the complete code, take a look at Susi server and join the gitter chat channel for discussions.



Supporting Dasherized Attributes and Query Params in flask-rest jsonapi for Open Event Server

In the Open Event API Server project, attributes of the API are dasherized.

What was the need for dasherizing the attributes in the API?

All the attributes in our database models are separated by underscores, i.e. first name would be stored as first_name. But most of the API client implementations support dasherized attributes by default. Attracting third-party client implementations in the future and making the API easy for them to set up were the primary reasons behind this decision. Also, to quote the official JSON API spec recommendation on the same:

Member names SHOULD contain only the characters “a-z” (U+0061 to U+007A), “0-9” (U+0030 to U+0039), and the hyphen minus (U+002D HYPHEN-MINUS, “-“) as separator between multiple words.

Note: The dasherized version for first_name will be first-name.

flask-rest-jsonapi is the API framework used by the project. We were able to dasherize the API responses and requests by adding inflect=dasherize to each API schema, where dasherize is the following function:

def dasherize(text):
   return text.replace('_', '-')
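For example, a schema using it might look like this (a minimal sketch with marshmallow-jsonapi, the schema library underlying flask-rest-jsonapi; the UserSchema and its fields are illustrative, not actual project code):

from marshmallow_jsonapi import Schema, fields

class UserSchema(Schema):
    class Meta:
        type_ = 'user'
        inflect = dasherize  # the function defined above: first_name is exposed as "first-name"

    id = fields.Str(dump_only=True)
    first_name = fields.Str()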


flask-rest-jsonapi also provides powerful features through query params, such as sparse fieldsets, sorting and inclusion of related objects.

But we observed that the query params were not being dasherized, which rendered the above awesome features useless 🙁 . The reason for this was that flask-rest-jsonapi took the query params as-is and searched for them in the API schema. As Python variable names cannot contain a dash, naming the attributes with a dash in the internal API schema was out of the question.

For adding dasherizing support to the query params, changes in the QueryStringManager, located at querystring.py in the framework root, are required. A config variable named DASHERIZE_API was added to turn this feature on and off.

Following are the changes required for dasherizing query params:

For Sparse Fieldsets, in the fields function, replace the following line:

result[key] = [value]

with

if current_app.config['DASHERIZE_API'] is True:
    result[key] = [value.replace('-', '_')]
else:
    result[key] = [value]


For sorting, in the sorting function, replace the following line:

field = sort_field.replace('-', '')

with

if current_app.config['DASHERIZE_API'] is True:
    field = sort_field[0].replace('-', '') + sort_field[1:].replace('-', '_')
else:
    field = sort_field[0].replace('-', '') + sort_field[1:]


For including related objects, in the include function, replace the following line:

return include_param.split(',') if include_param else []

with

if include_param:
    param_results = []
    for param in include_param.split(','):
        if current_app.config['DASHERIZE_API'] is True:
            param = param.replace('-', '_')
        param_results.append(param)
    return param_results
return []
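With these changes in place, dasherized query params work end to end. For instance, a request like the following (the resource and field names are illustrative) would internally be translated to the underscored attributes starts_at and ends_at:

GET /v1/events?sort=starts-at&fields[event]=starts-at,ends-at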



Running Dredd Hooks as a Flask App in the Open Event Server

The Open Event Server has been based on the micro-framework Flask since its initial phases. After implementing the API documentation, we decided to implement Dredd testing in the Open Event API.

After isolating each request in Dredd testing, the real challenge was to bind the database engine to the Dredd hooks. Since we have been using the Flask-SQLAlchemy db.Model base class for building all the models, Flask, being a micro-framework itself, came to our rescue, as we could easily bind the database engine to a Flask app. Conventionally, dredd hooks are written in pure Python, but we will be running them as a self-contained Flask app.

How do we initialise this Flask app in our dredd hooks? The Flask app can be initialised in the before_all hook easily, as shown below:

import dredd_hooks as hooks
from flask import Flask

def before_all(transaction):
    app = Flask(__name__)
    app.config.from_object('config.TestingConfig')


The database can be bound to the app as follows:

def before_all(transaction):
    app = Flask(__name__)
    app.config.from_object('config.TestingConfig')
    db.init_app(app)
    Migrate(app, db)


The challenge now is how to bind the application context when applying the database fixtures. In a normal Flask application this can be done as follows:

with app.app_context():
    # perform your operation


While for unit tests in python:

with app.test_request_context():
    # perform tests


But as all the hooks are separate from each other, dredd-hooks-python supports the idea of a single stash where you can store all the desired variables (neither the data structure nor the name stash is mandatory).

The app and db can be added to stash as shown below:

stash = {}

@hooks.before_all
def before_all(transaction):
    app = Flask(__name__)
    app.config.from_object('config.TestingConfig')
    db.init_app(app)
    Migrate(app, db)
    stash['app'] = app
    stash['db'] = db


These variables stored in the stash can be used efficiently as below:

@hooks.before_each
def before_each(transaction):
    with stash['app'].app_context():
        db.engine.execute("drop schema if exists public cascade")
        db.engine.execute("create schema public")
        db.create_all()


and many other such examples.
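For instance, a matching teardown could release the session after the run. A sketch assuming the same stash (dredd-hooks-python also provides an after_all hook):

@hooks.after_all
def after_all(transaction):
    # assumption: clean up the SQLAlchemy session inside the stashed app context
    with stash['app'].app_context():
        stash['db'].session.remove()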

Related Links:
1. Testing Your API Documentation With Dredd: https://matthewdaly.co.uk/blog/2016/08/08/testing-your-api-documentation-with-dredd/
2. Dredd Tutorial: https://help.apiary.io/api_101/dredd-tutorial/
3. Dredd Docs: http://dredd.readthedocs.io/


Building Meilix in Travis using Heroku

Suppose you have to trigger (start) a Travis build not by making a commit but by clicking a button on the web app of the Meilix Generator. Through the web app, we can pass the tag of the build which will be initiated and can also get the link to the build made by Travis.
Heroku is where we have deployed our web app, and a button on the web app is used to start the build on Travis. We have the ability to give a tag to the build, and with its help we can even predict the URL of the build beforehand, so one can use this for one's own personal project in a number of ways. How I used this feature in the Meilix Generator using the Meilix script is described below.

How I used this idea

The FOSSASIA meilix repository contains the script of a Linux operating system based on Lubuntu. It uses Travis to build that script, resulting in the release of an iso file.

We then thought of building an autonomous system to start this build and get the release, and in the meanwhile make the required changes to the script that go into the OS. We came up with the idea of a web app which asks the user for their email id, the tag of the build and, as of now, a picture which will be set as the wallpaper. It means the user is able to configure their distro according to their needs through a graphical interface, without a single line of code from the user's end.

On the web app, a click of the build button takes the user to a build page, which triggers Travis with the given user configuration to build the iso and deploy it on the GitHub releases page. The user gets the link to the build on the very next page.

How I implemented this idea

Thanks to the Travis API, without which our idea would have been impossible to implement. We used a shell script to frame our idea. The script takes the user, repository, and branch as input to decide where the trigger is to take place.

There are two files, the first being travis_tokens:

fossasia meilix master    # in the format of user repo branch

And the second, script.sh:

#!/bin/bash
# read the user, repo and branch, then trigger a Travis build via API v3
cat travis_tokens | while read line;
do
    array=(${line})
    user="${array[0]}"
    project="${array[1]}"
    len=${#array[@]}
    for ((i=2; i<len; i++)); do
        branch="${array[i]}"
        # email and TRAVIS_TAG are supplied to the build as environment variables
        body="{\"request\":{
            \"branch\":\"${branch}\",
            \"config\":{
                \"env\":{
                    \"email\":\"${email}\",
                    \"TRAVIS_TAG\":\"${TRAVIS_TAG}\"
                }
            }
        }}"
        echo "This Link Will be ready in approx 20 minutes"
        # pre-prediction of the release link: tag from the user, date from the system
        echo "https://github.com/fossasia/meilix/releases/download/${TRAVIS_TAG}/meilix-zesty-`date +%Y%m%d`-i386.iso"
        # API POST request asking Travis to build the most recent commit;
        # KEY is stored on Heroku as an environment variable and supplied from there;
        # %2F encodes "/" so user and repo are interpreted as a single URL segment
        curl -s -X POST \
            -H "Content-Type: application/json" \
            -H "Accept: application/json" \
            -H "Travis-API-Version: 3" \
            -H "Authorization: token ${KEY}" \
            -d "${body}" \
            "https://api.travis-ci.org/repo/${user}%2F${project}/requests"
    done
done

After the trigger, you will get an email containing a download link to the iso.

How can this idea be helpful to a developer

There are lots of ways a developer can use this idea, for example if a developer wants their users to trigger the build automatically and get the release build directly.

One can even set the commit message through the shell script, and customise the build configuration, e.g. replace, merge or deep_merge a configuration with the original .travis.yml file present in the source repo.


Know more about Travis API v3:
Triggering the build
API blog

Have a look at our webapp and generate your own iso:
https://melix-generator.herokuapp.com/

Source code here:
https://github.com/fossasia/meilix-generator
https://github.com/fossasia/meilix


Skill History Component in Susi Skill CMS

SUSI Skill CMS is an editor to write and edit skills easily. It is built on the ReactJS framework and follows an API-centric approach where the Susi server acts as the API server. Using Skill CMS we can browse the history of a skill, where we get the commit ID, the commit message and the name of the author who made the changes to the skill. In this blog post, we will see how to add a skill revision history component to Susi Skill CMS.

One text file represents one skill; it may contain several intents which all belong together. Susi skills are stored in the susi_skill_data repository. We can access any skill based on the four-tuple of parameters: model, group, language, skill.

<Menu.Item key="BrowseRevision">
 <Icon type="fork" />
   Browse Skills Revision
 <Link to="/browseHistory"></Link>
</Menu.Item>

First, let's create an option in the sidebar menu and link it to the "/browseHistory" route created in index.js:

 <Route path="/browseHistory" component={BrowseHistory} />

Next, we will add skill versioning using the endpoints provided by the Susi Server. To select a skill, we will create a drop down list using SelectField, a component of Material UI.

request('http://cors-anywhere.herokuapp.com/api.susi.ai/cms/getModel.json').then((data) => {
    data = data.data;
    for (let i = 0; i < data.length; i++) {
        models.push(<MenuItem value={i} key={data[i]} primaryText={`${data[i]}`}/>);
    }
});

<SelectField
    floatingLabelText="Expert"
    style={{width: '50px'}}
    value={this.state.value}
    onChange={this.handleChange}>
    {experts}
</SelectField>

We store the models, groups and languages in arrays using the endpoints api.susi.ai/cms/getModel.json, api.susi.ai/cms/getGroups.json and api.susi.ai/cms/getAllLanguages.json, and set the values in the respective select fields. The request function takes the url as a string, parses the JSON and fetches the object containing the data or an error, depending on the response from the server. Now run your project using

npm start

And you will be able to see the drop down list working.

Next, we will use Material UI tables for displaying the organized data. To use the table component, we need to import the table, its body, header, and row column components from the Material UI class.

import {Table, TableBody, TableHeader, TableHeaderColumn, TableRow, TableRowColumn} from "material-ui/Table";

We then make our header columns; in our case there are three, namely Commit ID, Commit Message and Author Name.

 <TableRow>
  <TableHeaderColumn tooltip="Commit ID">Commit ID</TableHeaderColumn>
  <TableHeaderColumn tooltip="Commit Message">Commit Message</TableHeaderColumn>
  <TableHeaderColumn tooltip="Author Name">Author Name</TableHeaderColumn>
 </TableRow>

To get the modification history of a skill, we will use the endpoint "http://api.susi.ai/cms/getSkillHistory.json". It uses JGit for managing version control in the skill data repository. JGit is a library which implements the Git functionality in Java. An endpoint is accessed based on user roles, which can be Admin, Privilege, User or Anonymous. In our case it is Anonymous, so a user need not log in to access this endpoint.

 url = "http://api.susi.ai/cms/getSkillHistory.json?model="+models[this.state.modelValue].key+"&group="+groups[this.state.groupValue].key+"&language="+languages[this.state.languageValue].key+"&skill="+this.state.expertValue;

After building the url, we make an AJAX network call to get the modification history of the skill. If the call succeeds, we store the desired data in a table array and display it in rows through the component's render() method, which checks whether the data is set in the state; if so, it renders the contents, otherwise we display the error that occurred while processing the request.

$.ajax({
    url: url,
    jsonpCallback: 'pccd',
    dataType: 'jsonp',
    jsonp: 'callback',
    crossDomain: true,
    success: function(data) {
        data = data.commits;
        let array = [];
        for (let i = 0; i < data.length; i++) {
            array.push(data[i]);
        }
    }
});

and render the rows in the table body:

{tableData.map((row, index) => (
  <TableRow key={index}>
    <TableRowColumn>{row.commitID}</TableRowColumn>
    <TableRowColumn>{row.commit_message}</TableRowColumn>
    <TableRowColumn>{row.author}</TableRowColumn>
  </TableRow>
))}
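For reference, the response from getSkillHistory.json contains a commits array whose field names can be read off the render code above; a response might look roughly like this (the values are illustrative):

{
  "commits": [
    {
      "commitID": "f2a50e3",
      "commit_message": "Updated the weather skill",
      "author": "John Doe"
    }
  ]
}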

Test the final output on http://skills.susi.ai/browseHistory or http://localhost:3000/browseHistory; select the model, group, language and skill, and get the history of that skill.


Next time you need drop down lists or tables to organize your data, do check out https://github.com/callemall/material-ui for examples, which can help in providing good outlines to your apps. For contributions to susi_skill_cms, join our chat channel on gitter: https://gitter.im/fossasia/susi_server and browse https://github.com/fossasia/susi_skill_cms for the complete code.



Adding Manual ISO Controls in Phimpme Android

The Phimpme Android application comes with a well-featured camera for taking high-resolution photographs. It features an auto mode as well as a manual mode for users who like to customise their camera experience. The manual mode lets users select from the range of ISO values supported by their device, to enhance images in circumstances where the auto mode fails, such as low-light conditions.

In this tutorial, I will be discussing how we achieved this in Phimpme Android with some code snippets and screenshots.

To provide the users with an option to select from the range of ISO values, the first thing we need to do is scan the phone for all the supported ISO values and store them in an ArrayList to display later on. This can be done with the snippet provided below:

String iso_values = parameters.get("iso-values");
if( iso_values == null ) {
    iso_values = parameters.get("iso-mode-values"); // Galaxy Nexus
    if( iso_values == null ) {
        iso_values = parameters.get("iso-speed-values"); // Micromax A101
        if( iso_values == null )
            iso_values = parameters.get("nv-picture-iso-values"); // LG dual P990
    }
}

Every device supports a different set of keywords to provide the list of ISO values, hence we have tried to cover every possible keyword to extract the values. The keywords used above cover almost 90% of android devices and fetch the set of ISO values successfully.

For devices which support ISO values but don't provide a keyword to extract them, we can provide the standard list of ISO values manually, using the code snippet provided below:

values.add("200");
values.add("400");
values.add("800");
values.add("1600");

After extracting the set of ISO values, we need to create a list to display to the user, from which a particular ISO value can be selected, as depicted in the Phimpme camera screenshot.

Now, to set the selected ISO value, we first need to get the ISO key under which the value should be set, as depicted in the code snippet provided below:

if( parameters.get(iso_key) == null ) {
    iso_key = "iso-speed"; // Micromax A101
    if( parameters.get(iso_key) == null ) {
        iso_key = "nv-picture-iso"; // LG dual P990
        if( parameters.get(iso_key) == null ) {
            if( Build.MODEL.contains("Z00") )
                iso_key = "iso"; // Asus Zenfone 2 Z00A and Z008
        }
    }
}

Getting the key to set the ISO values is similar to getting the key to extract the ISO values from the device. The ISO keys listed above cover most devices.

Now that we have the ISO key, we change the camera parameters to reflect the selected value.

parameters.set(iso_key, supported_values.selected_value);
setCameraParameters(parameters);
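The setCameraParameters helper is not shown in the snippets above; a plausible sketch is simply a guarded call to the Camera API (the try/catch is an assumption, since some devices throw a RuntimeException on unsupported parameter combinations):

private void setCameraParameters(Camera.Parameters parameters) {
    try {
        // apply the updated parameters, including the new ISO value
        camera.setParameters(parameters);
    } catch (RuntimeException e) {
        e.printStackTrace();
    }
}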

To get the full source code on how to set the ISO values manually, please refer to the Phimpme Android repository.

Resources

  1. Stackoverflow – Keywords to extract ISO values from the device: http://stackoverflow.com/questions/2978095/android-camera-api-iso-setting
  2. Open camera Android source code: https://sourceforge.net/p/opencamera/code/ci/master/tree/
  3. Blog – Learn more about ISO values in photography: https://photographylife.com/what-is-iso-in-photography

Visualising Tweet Statistics in MultiLinePlotter App for Loklak Apps

The MultiLinePlotter app is now a part of the Loklak apps site. This app can be used to compare aggregations of tweets containing particular query words and visualise the data for better comparison. Recently there has been a new addition to the app: a feature for showing tweet statistics, like the maximum number of tweets containing the given query word (along with the date) and the average number of tweets over a period of time. These statistics are visualised for all the query words for better comparison.

Related issue: https://github.com/fossasia/apps.loklak.org/issues/236

Obtaining Maximum number of tweets and average number of tweets

Before visualising the statistics we need to obtain them. For this we simply need to process the aggregations returned by the Loklak API. Let us start with the maximum number of tweets containing the given query word. What we actually require is the maximum number of tweets posted on a single day containing the keyword, and the date on which that maximum occurred. For this we can use a function which iterates over all the aggregations and returns the largest count along with its date.

$scope.getMaxTweetNumAndDate = function(aggregations) {
        var maxTweetDate = null;
        var maxTweetNum = -1;

        for (date in aggregations) {
            if (aggregations[date] > maxTweetNum) {
                maxTweetNum = aggregations[date];
                maxTweetDate = date;
            }
        }

        return {date: maxTweetDate, count: maxTweetNum};
    }

The above function maintains two variables, one for the maximum number of tweets and another for the date. We iterate over all the aggregations, and for each aggregation we compare the number of tweets with the value stored in the maxTweetNum variable; if the current value is greater, we simply update the variable and keep track of the date. Finally we return an object containing both the maximum number of tweets and the corresponding date. Next we need to obtain the average number of tweets. We can do this by summing up all the tweet frequencies and dividing the sum by the number of aggregations.

$scope.getAverageTweetNum = function(aggregations) {
        var avg = 0;
        var sum = 0;

        for (date in aggregations) {
            sum += aggregations[date];
        }

        return parseInt(sum / Object.keys(aggregations).length);
    }

The above function calculates the average number of tweets in the way just described.

Next, for every query word, we need to store these values in a format that can easily be understood by morris.js. For this we use a list, store the statistics values for individual query words as objects, and later pass the list as a parameter to morris.

var maxStat = $scope.getMaxTweetNumAndDate(aggregations);
var avg = $scope.getAverageTweetNum(aggregations);

$scope.tweetStat.push({
    tweet: $scope.tweet,
    maxTweetCount: maxStat.count,
    maxTweetOn: maxStat.date,
    averageTweetsPerDay: avg,
    aggregationsLength: Object.keys(aggregations).length
});

We maintain a list called tweetStat which contains objects storing the query word and the corresponding statistics.

Apart from plotting these statistics, the app also displays them when the user clicks on an individual tweet entry present in the search record section. For this we filter the tweetStat list mentioned above, get the object corresponding to the query word the user selected and bind it to the Angular scope. Next we display it using HTML.

<div class="tweet-stat max-tweet">
                  <div class="stat-label"> <h4>Maximum number of tweets containing '{{modalHeading}}' :</h4></div>
                  <div class="stat-value"> <strong>{{selectedTweetStat.maxTweetCount}}</strong> tweets on
                    <strong>{{selectedTweetStat.maxTweetOn}}</strong>
                  </div>
                </div>

Finally we need to plot the statistics. For this we use a function called plotStatGraph, dedicated solely to plotting the statistics graph. We pass the tweetStat list as a parameter to morris and configure all the other parameters.

$scope.plotStatGraph = function() {
        $scope.plotStat = new Morris.Bar({
            element: 'graph',
            data: $scope.tweetStat,
            xkey: 'tweet',
            ykeys: ['maxTweetCount', 'averageTweetsPerDay'],
            labels: ['Maximum no. of tweets : ', 'Average no. of tweets/day'],
            parseTime: false,
            hideHover: 'auto',
            resize: true,
            stacked: true,
            barSizeRatio: 0.40
        });
        $scope.graphLoading = false;
    }

But now we have two graphs, one for showing variations in aggregations and the other for showing statistics. How do we manage them? We need to show them on the same page, as this is a single page app, and we also need to avoid vertical scrolling, as it would degrade both UI and UX. So we need to implement a switching mechanism: the user should be able to toggle between the two graph views as they wish. To achieve this, we maintain a global variable which keeps track of the current plot type. If the current graph type is aggregations, we call the function for plotting aggregations, otherwise we call the above mentioned function to plot statistics.

$scope.plotData = function() {
        $(".plot-data").html("");
        if ($scope.currentGraphType === "aggregations") {
            $scope.plotAggregationGraph();
        } else {
            $scope.plotStatGraph();
        }
    }

Lastly we integrate this state variable (currentGraphType) with the UI so that users can easily toggle between graph views with just a click.

<div class="switch" ng-click="toggle()">
                <span ng-if="queryRecords.length !== 0" class="glyphicon glyphicon-stats"></span>
              </div>
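The toggle() handler bound above is not shown in the post; a minimal sketch consistent with plotData() and currentGraphType might be (the names follow the snippets above, the body is an assumption):

$scope.toggle = function() {
    // flip the plot type and redraw the currently selected graph
    $scope.currentGraphType = ($scope.currentGraphType === "aggregations") ? "statistics" : "aggregations";
    $scope.plotData();
};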

