Integrating Stock Sensors with PSLab Android App

A sensor is a device (almost always an integrated circuit) which receives data from the outer environment and produces an electric signal proportional to it. This signal is then processed by a microcontroller or a processor to provide useful functionality. A mobile device running the Android operating system usually has a few sensors built into it. The main purpose of these sensors is to provide the user with a better experience, such as rotating the screen as the device moves, or turning the screen off during a call to prevent unwanted touch events. The PSLab Android application is capable of processing inputs received by different sensors plugged into the PSLab device and producing useful results. Developers are currently planning to integrate the stock sensors with the PSLab Android application so that the application can be used even without the PSLab device.

This blog is about how to initiate a stock sensor available in an Android device and get readings from it. The Sensor API provided by Google developers is really helpful in achieving this task. The process consists of several steps. It is also important to note that some devices support only a few sensors, while others support many more. A few basic sensors are available on almost every device, such as:

  • “Accelerometer” – measures acceleration along the X, Y and Z axes
  • “Gyroscope” – measures device rotation around the X, Y and Z axes
  • “Light Sensor” – measures illumination in lux
  • “Proximity Sensor” – measures the distance from the sensor to an obstacle

The implementation steps are as follows:

  1. Check availability of sensors

The first step is to obtain the SensorManager from Android system services. This class has a method to list all the available sensors in the device.

SensorManager sensorManager;
sensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
List<Sensor> sensors = sensorManager.getSensorList(Sensor.TYPE_ALL);

Once the list is populated, we can iterate through it to find out whether the required sensors are available, and avoid displaying activities related to sensors that the device does not support.

for (Sensor sensor : sensors) {
   switch (sensor.getType()) {
       case Sensor.TYPE_ACCELEROMETER:
           break;
       case Sensor.TYPE_GYROSCOPE:
           break;
       ...
   }
}
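
Alternatively, when only a couple of specific sensors matter, getDefaultSensor() returns null for an absent sensor type. The helper below is a minimal sketch of that check using the standard Android Sensor API; it is not code from the PSLab app itself:

// Returns true if the device has a sensor of the given type,
// e.g. Sensor.TYPE_ACCELEROMETER; getDefaultSensor() returns null otherwise.
private boolean isSensorAvailable(SensorManager sensorManager, int sensorType) {
    return sensorManager.getDefaultSensor(sensorType) != null;
}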

  2. Read data from sensors

To read data sent from a sensor, one should implement the SensorEventListener interface. Under this interface, two methods need to be overridden.

public class StockSensors extends AppCompatActivity implements SensorEventListener {

    @Override
    public void onSensorChanged(SensorEvent sensorEvent) {

    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int i) {

    }
}

Of these two methods, onSensorChanged() is the one to address. It provides a SensorEvent parameter whose sensor object supports a getType() method call, returning an integer value that represents the type of sensor that produced the event.

@Override
public void onSensorChanged(SensorEvent sensorEvent) {
   switch (sensorEvent.sensor.getType()) {
       case Sensor.TYPE_ACCELEROMETER:
           break;
       case Sensor.TYPE_GYROSCOPE:
           break;
       ...
   }
}

Each available sensor should be registered with the SensorEventListener to make its events available in the onSensorChanged() method. The following code block illustrates how to modify the previous loop to register each sensor with the listener.

for (Sensor sensor : sensors) {
   switch (sensor.getType()) {
       case Sensor.TYPE_ACCELEROMETER:
           sensorManager.registerListener(this, sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER), SensorManager.SENSOR_DELAY_UI);
           break;
       case Sensor.TYPE_GYROSCOPE:
           sensorManager.registerListener(this, sensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE), SensorManager.SENSOR_DELAY_UI);
           break;
   }
}
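
A registered listener keeps its sensors active, so it is good practice to release them when the activity goes into the background. A minimal sketch of this cleanup (assuming sensorManager is kept as a field; this is not taken from the PSLab sources) could be:

@Override
protected void onPause() {
    super.onPause();
    // Stop receiving sensor events while the activity is not visible;
    // unregisterListener(this) removes every registration made for this listener.
    sensorManager.unregisterListener(this);
}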

Depending on the readings, we can present the user with numerical data, or with graphical data in graphs plotted using MPAndroidChart in the PSLab Android application.
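
As a rough illustration (not the PSLab implementation), accelerometer readings could be appended to an MPAndroidChart LineChart from onSensorChanged(); the lineChart field and its initial LineData/LineDataSet setup are assumed to exist:

@Override
public void onSensorChanged(SensorEvent sensorEvent) {
    if (sensorEvent.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        LineData lineData = lineChart.getData();
        // append the X-axis acceleration as a new point on the first dataset
        lineData.addEntry(new Entry(lineData.getEntryCount(), sensorEvent.values[0]), 0);
        lineData.notifyDataChanged();
        lineChart.notifyDataSetChanged();
        lineChart.invalidate(); // redraw the chart with the new point
    }
}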

The following images illustrate how a similar implementation is available in the Science Journal application developed by Google.

Resources


Reset SUSI.AI User Password & Parameter extraction from Link

In this blog I will discuss how Accounts handles an incoming request to reset a password. If users forget their password, they can use the forgot password button on http://accounts.susi.ai; its implementation is quite straightforward. As soon as a user enters an e-mail id and hits the RESET button, an ajax call is made which first checks whether the email id is registered with SUSI.AI. If not, a failure message is shown to the user. If the user is found to be registered in the database, an email is sent containing a 30-character-long token. On the server, the token is hashed against the user's email id and given a validity of 7 days.
Let us have a look at the Reset Password link one receives.

http://accounts.susi.ai/?token={30 characters long token}

On clicking this link, the user is redirected to http://accounts.susi.ai with the token as a request parameter. At the client side, a check is made to evaluate whether the URL has a token parameter or not. That was the overview. Since http://accounts.susi.ai is based on the ReactJS framework, this does not work like the native PHP functionality, but it is much more logical and systematic. Let us now take a closer look at how this parameter is searched for, and how the token is extracted and validated.

As you can see, http://accounts.susi.ai and http://accounts.susi.ai/?token={token} both redirect the user to the same URL. So the first task that needs to be accomplished is to check whether a parameter is passed in the URL or not.
First import addUrlProps and UrlQueryParamTypes from the react-url-query package and PropTypes from the prop-types package. These will be required in further steps.
Have a look at the code, and then we will understand its working.

const urlPropsQueryConfig = {
  token: { type: UrlQueryParamTypes.string },
};

class Login extends Component {
  static propTypes = {
    // URL props are automatically decoded and passed in based on the config
    token: PropTypes.string,
    // change handlers are automatically generated when given a config.
    // By default they update that single query parameter and maintain existing
    // values in the other parameters.
    onChangeToken: PropTypes.func,
  }

  static defaultProps = {
    token: "null",
  }

Above, in the first step, we have defined which parameter should trigger Reset Password. It means that if and only if the incoming parameter in the URL is token do we move to the next step; otherwise the normal http://accounts.susi.ai page is rendered. We have also defined the data type of the token parameter to be UrlQueryParamTypes.string.
PropTypes are attributes in ReactJS, similar to attributes of tags in HTML. So again, we have defined the data type. onChangeToken is a custom handler which is fired whenever the token parameter is modified. To declare default values, we have used defaultProps. If token is not passed in the URL, it is set to "null" by default. This is still not the last step: till now we have not checked whether a token is there or not. That is done in the componentDidMount function, which is called right after the component has been rendered on the client. In this function, we check whether the value of token is not equal to "null"; if so, a value must have been passed to it, and the function extracts the token from the URL and redirects the user to the reset password application. Below is the code snippet for better understanding.

componentDidMount() {
  const {
    token
  } = this.props;
  console.log(token);
  if (token !== "null") {
    this.props.history.push('/resetpass/?token=' + token);
  }
  if (cookies.get('loggedIn')) {
    this.props.history.push('/userhome', { showLogin: false });
  }
}

With this, the first step of the process has been achieved. In the second step, we extract the token from the URL in a similar fashion, followed by an ajax request to the server, which validates the token and responds to the client accordingly (the token might be invalid or expired). Success is encoded in a JSON object and sent to the client. An error is thrown as an error only, and the client shows an "invalid token" message. On success, the server sends the email id of the user against which the token was hashed, and the client displays it on the page. Now the user has to enter a new password and confirm it. The client also checks whether both fields have the same value. If they do, a simple ajax call is made to the server with the new password, the email id and the token. If no error occurs in between (like a connection timeout, the server being down, etc.), the client is sent a JSON response again with the "accepted" flag set to true and the message "Your password has been changed!".

Additional Resources:

(npmjs – official documentation of create-react-apps)

(Fortnox developer’s blog post)

(Dave Ceddia’s blog post for new ReactJS developers)


Download SUSI.AI Setting Files from Data Folder

In this blog, I will discuss how the DownloadDataSettings servlet hosted on the SUSI server functions. This post also covers a step-by-step demonstration of how to use this feature if you have hosted your own custom SUSI server and have admin rights on it. Given below is the endpoint where the request to download a particular file has to be made.

/data/settings

For systematic functionality and workflow, users with admin login are given special access. This allows them to download the settings files and go through them easily when needed. There are various files, which hold the email ids of registered users (accounting.json), the user roles associated with them (authorization.json), the groups they are a part of (groups.json), etc. To list all the files in the folder, use the endpoint given below:

/aaa/listSettings.json

How does the above servlet work? Before that, let us see how to get admin rights on your custom SUSI.AI server.
For admin login, it is required that you have access to the files and folders on the server. Sign up with an account and browse to

/data/settings/authorization.json

Find the email id with which you signed up and change its userRole to “admin”. For example, an entry initially looks like this:

{
	"email:test@test.com": {
		"permissions": {},
		"userRole": "user"
	}
}

If you have signed up with an email id “test@test.com” and want to give admin access to it, modify the userRole to “admin”. See below.

{
	"email:test@test.com": {
		"permissions": {},
		"userRole": "admin"
	}
}

Till now, the server did not have any email id with admin login or a user role equal to admin; hence this exercise is required only for the first admin. Later admins can use the changeUserRole application to give, change or modify user roles for any registered user. By now you must have an admin login session. Let's now see how the download and file-listing servlets work.
First, the server creates a path by locally referencing the settings folder with the help of DAO.data_dir.getPath(). This gives a string path to the data directory containing all the data/settings files. Now the server just has to make a JSONArray, passing a String array to the JSONArray's constructor, which will eventually contain the names of all the data/settings files. If the process is not successful, then "accepted" = false is sent as an error to the user. The base user role to access the servlet is ADMIN, as only admins are allowed to download data/settings files.
The name of the file to download has to be sent in an HTTP request as a GET parameter. For example, if an admin has to download accounting.json to get the list of all registered users, the request is made in the following way:

BASE_URL+/data/settings?file=file_name

*BASE_URL is the URL where the server is hosted. For the standard server, use BASE_URL = http://api.susi.ai.

In the initial steps, the server generates a path to the data/settings folder and finds the file whose name it receives in the request. If no filename is specified in the API call, the server sends the accounting.json file by default.

File settings = new File(DAO.data_dir.getPath()+"/settings");
String filePath = settings.getPath(); 
String fileName = post.get("file","accounting"); 
filePath += "/"+fileName+".json";

Next, the server will extract the file and, using the ServletOutputStream class, generate properties for it and set an appropriate context. This context, in turn, fetches the mime type for the file. If the mime type is returned as null, the mime type for the file is set to application/octet-stream by default. For more information on mime types, please look at the following link. A complete list of mime types is compiled and documented here.
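
The lookup itself can be done through the servlet context; the snippet below is only a sketch of that step, assuming the standard getMimeType() servlet API:

// ask the servlet container for the mime type registered for this file name
ServletContext context = getServletContext();
String mimetype = context.getMimeType(filePath);
if (mimetype == null) {
    // fall back to a generic binary type when the container does not know the file
    mimetype = "application/octet-stream";
}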

response.setContentType(mimetype);
response.setContentLength((int)file.length());

In the above code snippet, the mime type and the length of the file being downloaded are set. Next, we set the headers for the download response, using the filename in the Content-Disposition header.

response.setHeader("Content-Disposition", "attachment; filename=" + fileName +".json");

All the manual work is done by now. The only thing left is to open a buffered stream, whose size has been defined as a class variable.
Here we use a byte array of 4096 elements and write the file to the client's default download location.

private static final int BUFSIZE = 4096;

byte[] byteBuffer = new byte[BUFSIZE];
int length;
DataInputStream in = new DataInputStream(new FileInputStream(file));
// copy the file to the output stream in BUFSIZE chunks until EOF (read() returns -1)
while ((length = in.read(byteBuffer)) != -1) {
    outStream.write(byteBuffer, 0, length);
}

in.close();
outStream.close();

All the above-mentioned steps are enclosed in a try-catch block, which catches an exception, if any, and logs it in the log file. This message is also sent to the client for appropriate user information, along with a success or failure indication through a boolean flag. Do not forget to close the input and output streams, as leaving them open can lead to resource leaks and, in the worst case, expose essential or secured data to someone with the right knowledge of network and buffer streams.
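
A try-with-resources block is a tidy alternative here, since it closes both streams even when an exception is thrown. This is only a sketch of the idea, not the servlet's actual code, and the DAO.severe() logging call is an assumption borrowed from how other parts of the server log errors:

try (DataInputStream in = new DataInputStream(new FileInputStream(file));
     ServletOutputStream outStream = response.getOutputStream()) {
    int length;
    byte[] byteBuffer = new byte[BUFSIZE];
    while ((length = in.read(byteBuffer)) != -1) {
        outStream.write(byteBuffer, 0, length);
    }
} catch (IOException e) {
    // log the failure; the client is informed through the "accepted" = false flag
    DAO.severe(e);
}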

Additional Resources


Password Recovery Link Generation in SUSI with JSON

In this blog, I will discuss how the SUSI server processes a password recovery request using JSON. The post will also cover some parts of the client-side implementation for better insight.
All the clients function in the same way. When you click on the forgot password button, the client asks whether you want to recover the password for your own custom server or for the standard one. This choice defines the base URL where the forgot password request is made. If you select the custom server radio button, you will be asked to enter the URL of your server; otherwise the standard base URL is used. The API endpoint used is

/aaa/recoverpassword.json

The client makes a POST request with a "forgotemail" parameter containing the email id of the user making the request. From the email id provided, the server generates a client identity if and only if the email id is registered. If the email id is not found in the database, the server throws an exception with error code 422.

String usermail = call.get("forgotemail", null);
ClientCredential credential = new ClientCredential(ClientCredential.Type.passwd_login, usermail);
ClientIdentity identity = new ClientIdentity(ClientIdentity.Type.email, credential.getName());
if (!DAO.hasAuthentication(credential)) {
	throw new APIException(422, "email does not exist");
}

If the email id is found to be registered against a valid user in the database, a call is made to a method which returns a random string of the length passed in as a parameter. This returned random string acts as the token.
Below is the implementation of the createRandomString(int length) method.

public static String createRandomString(Integer length) {
    char[] chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789".toCharArray();
    StringBuilder sb = new StringBuilder();
    Random random = new Random();
    for (int i = 0; i < length; i++) {
        char c = chars[random.nextInt(chars.length)];
        sb.append(c);
    }
    return sb.toString();
}

This method is defined in the AbstractAPIHandler class. The function createRandomString(int length) initialises an array with the alphabet (both upper and lower case) and the digits 0 to 9. A StringBuilder object is declared, and the random characters are appended to it in the for loop.
Next, the generated token is hashed against the user's email id. Since we use a token of length 30 drawn from 62 possible characters, there are 62^30 (roughly 6 × 10^53) possible tokens, so the chance of two tokens being the same is vanishingly small (approximately zero). Validity for the token is set to 7 days (one week); after that the token expires, and a new token is needed to reset the password.

String token = createRandomString(30);
ClientCredential tokenkey = new ClientCredential(ClientCredential.Type.resetpass_token, token);
Authentication resetauth = new Authentication(tokenkey, DAO.passwordreset);
resetauth.setIdentity(identity);
resetauth.setExpireTime(7 * 24 * 60 * 60);
resetauth.put("one_time", true);

Everything is set by now; the only thing left is to send a mail to the user. For that we call the sendEmail() method of the EmailHandler class, which requires 3 parameters: the user's email id, the subject of the email, and the body of the email. The body contains a verification link. To get this verification link, getVerificationMailContent(String token) is called, with the token generated in the previous step passed as a parameter.

String verificationLink = DAO.getConfig("host.url", "http://127.0.0.1:9000") + "/apps/resetpass/index.html?token=" + token;

The above command builds the base URL of the server and appends the link to the reset password app along with the token it received in the method call. The rest of the body is saved as a template in the /conf/templates/reset-mail.txt file. Finally, if no exception was caught during the process, the message “Recovery email sent to your email ID. Please check” and accepted = true are encoded into JSON data and sent to the client. If an exception was encountered, the exception message and accepted = false are sent to the client.
Now the client processes the JSON object and notifies the user appropriately.
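
For reference, the mail dispatch described above could look roughly like the sketch below; the subject string is a made-up placeholder, and the call simply follows the three parameters (recipient, subject, body) named earlier:

// hypothetical sketch: the subject text is a placeholder, not the server's wording
String subject = "SUSI.AI password recovery";
String body = getVerificationMailContent(token); // builds the mail body around the link
EmailHandler.sendEmail(usermail, subject, body);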

Additional Resources


Markdown responses from SUSI Server

Most of the time SUSI sends a plain text reply, but for some replies we can set the type of the query as markdown and format the output as computer- or bot-typed images. In this blog I will explain how to get images with markdown instead of large texts.

This servlet is a simple HttpServlet and does not require any type of user authentication or base user role. So, instead of extending AbstractAPIHandler, we extend HttpServlet.

public class MarkdownServlet extends HttpServlet {

The doGet() method is fired when we send a GET request to the server. It accepts the request parameters and hands them to the process(…) method.
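
The override itself is not shown in the excerpt; a minimal sketch of it, assuming the Query/RemoteAccess helpers that other SUSI server servlets use to wrap request parameters, could be:

@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    // wrap the raw request so its parameters and DoS bookkeeping are available
    // through the server's Query helper (an assumption based on sibling servlets)
    Query post = RemoteAccess.evaluate(request);
    // ... the DoS check and the process(...) call follow, as shown below
}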

One major precaution with an openly accessible service like this is to ensure no one takes advantage of it. As a first step, we make sure that a user is not trying to access the server too frequently. If the server finds the request frequency too high, it returns a 503 error to the user.

if (post.isDoS_blackout()) {response.sendError(503, "your request frequency is too high"); return;} // DoS protection
process(request, response, post);
}

The process function is where all the processing is done. The text is extracted from the URL; all the parameters are sent in the GET request, and the process(…) function parses the query. After that, we read all the parameters, like color, padding, uppercase and text color, into local variables.

http://api.susi.ai/vis/markdown.png?text=hello%20world%0Dhello%20universe&color_text=000000&color_background=ffffff&padding=3

Here we calculate the optimum image size. A perfect size has a 2:1 format that fits into the preview window, and we should not allow the left or right border to be cut away. We also resize the image here if necessary: different clients can request different sizes of images, and we can compute the optimum image size here.

int lineheight = 7;
int yoffset = 0;
int height = width / 2;
while (lineheight <= 12) {
    height = linecount * lineheight + 2 * padding - 1;
    if (width <= 2 * height) break;
    yoffset = (width / 2 - height) / 2;
    height = width / 2;
    lineheight++;
}

Then we print our text onto the image. This is also done using the RasterPlotter: using all the parameters that we parsed above, we create a new image and set the colors, line width, padding, etc. Here we make a matrix and apply all the parameters that we calculated above to our image.

RasterPlotter matrix = new RasterPlotter(width, height, drawmode, color_background);
matrix.setColor(color_text);
...
// inside the loop over the characters of the text:
if (c == '\n' || (c == ' ' && column + nextspace - pos >= 80)) {
    x = padding - 1;
    y += lineheight;
    column = 0;
    hashcount = 0;
    if (!isFormatted) {
        matrix.setColor(color_text);
    }
    isBold = false;
    isItalic = false;
    continue;
}

After we have our image, we print the SUSI branding. The branding is placed at the bottom right of the image: it prints “MADE WITH HTTP://SUSI.AI” there.

PrintTool.print(matrix, matrix.getWidth() - 6, matrix.getHeight() - 6, 0, "MADE WITH HTTP://SUSI.AI", 1, false, 50);

At the end, we write the image and set the cross-origin access headers. This header is very important when different clients on different domains query the server; if it is not provided, the request may fail with a “Cross Origin Access blocked” error.

response.addHeader("Access-Control-Allow-Origin", "*");
RemoteAccess.writeImage(fileType, response, post, matrix);
post.finalize();
}
}

This servlet can be locally tested at:

http://localhost:4000/vis/markdown.png?text=hello%20world%0Dhello%20universe&color_text=000000&color_background=ffffff&padding=3

Or at the SUSI.ai API server: http://api.susi.ai/vis/markdown.png?text=hello%20world%0Dhello%20universe&color_text=000000&color_background=ffffff&padding=

Outputs

References

Oracle ImageIO Docs: https://docs.oracle.com/javase/7/docs/api/javax/imageio/ImageIO.html

Markdown Tutorial: https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet

Java 2D Graphics: http://docs.oracle.com/javase/tutorial/2d/


Changing Dimensions of Search Box Dynamically in Susper

Earlier the Susper search box had fixed dimensions: when a user typed in a query, the dimensions of the search box remained the same. This approach raised several concerns:

  • The dimensions of the search bar should match those of the market leader.
  • When the dimensions change dynamically, the alignment w.r.t. the tabs on the results page should not be disturbed.

What actually happens now is that when a user enters a query, the search box quickly changes its dimensions as the results appear. I will discuss below how we achieved this goal.

On the home page, we gave the search bar dimensions of 584 x 44 pixels.

On the results page, we gave the search bar dimensions of 632 x 44 pixels, similar to the market leader:

How we proceeded?

Susper is built on Angular v4.1.3. Every new component automatically comes with an ngOnInit() function; ngOnInit() is part of the lifecycle hooks in Angular 4 (and Angular 2 as well) and is called once the component has been initialised. This was the key to changing the dimensions of the search bar dynamically as soon as a component is created.

What happens is that when a user types a query on the homepage and hits enter, the results component is created. As soon as it is created, the ngOnInit() function is called.

The default dimensions of the search bar are provided as follows:

search-bar.component.css

#navgroup {
  height: 44px;
  width: 584px;
}

When the homepage loads up, the dimensions are 584 x 44 by default.

search-bar.component.ts

private navbarWidth: any;

ngOnInit() {
  this.navbarWidth = '632px';  // the width must be a string including the CSS unit
}

search-bar.component.html

We used the [style.width] attribute to change the dimensions dynamically. Add this attribute inside the input element.

<input #input type="text" name="query" class="form-control" id="nav-input" (ngModelChange)="onquery($event)" [(ngModel)]="searchdata.query" autocomplete="off" (keypress)="onEnter($event)" [style.width]="navbarWidth">

As soon as the results component is loaded, the dimensions of the search bar change to 632 x 44. In this way, we change the dimensions of the search bar dynamically as soon as the results are loaded.

Resources


Controlling Camera Actions Using Voice Interaction in Phimpme Android

In this blog, I will explain how I implemented Google Voice Actions to control camera features in the Phimpme Android project. I will cover the following features I have implemented:

  • Opening the application using Google Voice command.
  • Switching between the cameras.
  • Clicking a picture and saving it through voice command.

Opening the application when the user gives a command to Google Now: when the user says “Take a selfie” or “Click a picture” to Google Now, it directly opens the Phimpme camera activity.

First, we need to add an intent filter to the manifest file so that Google Now can detect the Phimpme camera activity.

<activity
   android:name=".opencamera.Camera.CameraActivity"
   android:screenOrientation="portrait"
   android:theme="@style/Theme.AppCompat.NoActionBar">
   <intent-filter>
       <action android:name="android.media.action.IMAGE_CAPTURE"/>

       <category android:name="android.intent.category.DEFAULT"/>
       <category android:name="android.intent.category.VOICE"/>
   </intent-filter>
</activity>

The category android:name="android.intent.category.VOICE" is added to the IMAGE_CAPTURE intent filter so that Google Now can launch the camera activity. For Google Now assistance to accept commands inside the camera activity, we also need to add the following to the STILL_IMAGE_CAMERA intent filter of the camera activity.

<intent-filter>
   <action android:name="android.media.action.STILL_IMAGE_CAMERA"/>

   <category android:name="android.intent.category.DEFAULT"/>
   <category android:name="android.intent.category.VOICE"/>
</intent-filter>

So now, when the user says “OK Google” followed by “Take a picture”, the camera activity of Phimpme opens.

Integrating Google Voice assistance in Camera Activity

Second, after opening the camera application, the Google assistance should ask a question.

The camera activity in Phimpme can be opened in two ways:

  • When opened from a different application.
  • When given a command via Google Now assistance.

We need to check whether the camera activity was prompted by Google assistance in order to activate voice commands. We check this in the onResume function.

@Override
public void onResume() {
    super.onResume();
    if (CameraActivity.this.isVoiceInteraction()) {
        startVoiceTrigger();
    }
}

If isVoiceInteraction() returns true, the voice assistance prompts.

Assistance to ask which camera to use

Third, after the camera activity opens, the Google assistance should ask which camera to use: front or back.

To take voice input from the user, we store the expected commands in a VoiceInteractor.PickOptionRequest.Option. This listens for the command from the user; we also need to add synonyms for the same command.

To choose the rear camera

VoiceInteractor.PickOptionRequest.Option rear = new VoiceInteractor.PickOptionRequest.Option(getResources().getString(R.string.camera_rear), 0);
rear.addSynonym(getResources().getString(R.string.rear));
rear.addSynonym(getResources().getString(R.string.back));
rear.addSynonym(getResources().getString(R.string.normal)); 

I added synonyms like rear, normal and back.

To choose the front camera

VoiceInteractor.PickOptionRequest.Option front = new VoiceInteractor.PickOptionRequest.Option(getResources().getString(R.string.camera_front), 1);
front.addSynonym(getResources().getString(R.string.front));
front.addSynonym(getResources().getString(R.string.selfie_camera));
front.addSynonym(getResources().getString(R.string.forward));

I added synonyms like front, selfie camera and forward.

For the assistance to ask a question such as “Which camera would you like to use?”, I used getVoiceInteractor() and submitted a VoiceInteractor.PickOptionRequest, inflating the VoiceInteractor.PickOptionRequest.Option[] array with the options front and rear.

CameraActivity.this.getVoiceInteractor()
        .submitRequest(new VoiceInteractor.PickOptionRequest(
                new VoiceInteractor.Prompt("Which camera would you like to use?"),
                new VoiceInteractor.PickOptionRequest.Option[]{front, rear},
                null) {

The Google assistance waits for a response from the user for only a few seconds before going inactive. If the user gives an unexpected command, the assistance asks the question one more time.

Check if the user gives an expected command or not.

We override the onPickOptionResult(boolean finished, Option[] selections, Bundle result) function. The check if (finished && selections.length == 1) verifies that the user's speech matched exactly one of the provided options, after which we find out which option it was.

Check the command given by the user to switch between the cameras.

The two options were created with indexes 0 and 1. If the command given was “rear”, then selections[0].getIndex() == 0 and the camera activity switches to the rear camera; if the command given was “front”, then selections[0].getIndex() == 1 and the camera activity switches to the front camera.

@Override
public void onPickOptionResult(boolean finished, Option[] selections, Bundle result) {
    if (finished && selections.length == 1) {
        Message message = Message.obtain();
        message.obj = result;
        if (selections[0].getIndex() == 0) {
            rearCamera();
            asktakePicture();
        }
        if (selections[0].getIndex() == 1) {
            asktakePicture();
        }
    } else {
        getActivity().finish();
    }
}

Click a picture when the user says “Cheese”

After switching the camera, the assistant prompts the message “Say cheese”; for this we use a VoiceInteractor.Prompt("Say cheese").

We store the synonyms in a VoiceInteractor.PickOptionRequest.Option. I have added synonyms like ready, go, take it, OK, and cheese to click a picture. If the user gives an unexpected input, the assistance checks whether selections.length == 1 and prompts the message “Say cheese” again.

private void asktakePicture() {
    VoiceInteractor.PickOptionRequest.Option option = new VoiceInteractor.PickOptionRequest.Option(getResources().getString(R.string.cheese), 2);
    option.addSynonym(getResources().getString(R.string.ready));
    option.addSynonym(getResources().getString(R.string.go));
    option.addSynonym(getResources().getString(R.string.take));
    option.addSynonym(getResources().getString(R.string.ok));
    getVoiceInteractor()
            .submitRequest(new VoiceInteractor.PickOptionRequest(
                    new VoiceInteractor.Prompt(getResources().getString(R.string.say_cheese)),
                    new VoiceInteractor.PickOptionRequest.Option[]{option},
                    null) {
                @Override
                public void onPickOptionResult(boolean finished, Option[] selections, Bundle result) {
                    if (finished && selections.length == 1) {
                        Message message = Message.obtain();
                        message.obj = result;
                        takePicture();
                    } else {
                        getActivity().finish();
                    }
                }

                @Override
                public void onCancel() {
                    getActivity().finish();
                }
            });
}

Conclusion

Now, users can start the camera activity on Phimpme through the voice command “Take a selfie”. They can switch between the cameras through voice commands like “selfie camera”, “back camera”, “back” or “front”, and finally click a picture by giving the voice command “Cheese”, “Click it” or a related synonym.

Github

Resources


Aligning Images to Same Height Maintaining Ratio in Susper

In this blog, I’ll be sharing how we aligned images to the same height in Susper without disturbing their aspect ratio. When it comes to aligning images perfectly, they should have:

  • The same height.
  • A proper ratio, to maintain image quality. Many developers apply the same width and height without keeping the image ratio in mind, which results in:
    • a blurred image,
    • a pixelated image,
    • cropping of the image.

Earlier, Susper had an image layout like this:

In the screenshot, the images are not properly aligned, and they do not have the same height. We wanted to improve the layout of images just like the market leaders Google and DuckDuckGo.

How did we implement a better layout for images?

<div class="container">
  <div class="grid" *ngIf="Display('images')">
    <div class="cell" *ngFor="let item of item$ | async">
      <a class="image-pointer" href="{{item.link}}">
        <img class="responsive-image" src="{{item.link}}"></a>
    </div>
  </div>
</div>

I have created a container, in which the images are loaded from the yacy server. Then I created a grid with an equal number of rows and columns, and adjusted their height and width to obtain a grid in which each division acts as a cell. The images load inside the cells, one image per cell.

.grid {
  padding-left: 80px;
}
.container {
  width: 100%;
  margin: 0;
  top: 0;
  bottom: 0;
  padding: 0;
}

After implementing it, we faced issues like cropping of the image inside a cell. So, to avoid cropping and maintain the image ratio, we introduced the .responsive-image class, which avoids cropping images inside a cell.

.responsive-image {
  max-width: 100%;
  height: 200px;
  padding-top: 20px;
  padding: 0.6%;
  display: inline-block;
  float: left;
}

This is how Susper’s image section looks now:

It took some time to align images, but somehow we succeeded in creating a perfect layout for the images.

We are still facing some issues with images: some of them don't appear due to broken links. This issue will be resolved on the server soon.

Resources


Showing RSS and Table Type Replies in SUSI Messenger Bots

All messengers support “plain text” replies. To show web search (RSS) or table type replies, either:

  1. We need a “list type” (as in Facebook Messenger) or “table type” reply support built into the messenger itself, or
  2. We need to convert the RSS or table type reply from the SUSI API to plain text, so that we can send it using the “plain text” reply support available in almost every messenger.

The second approach is the most favorable: that way, replying with RSS or table type results depends only on the text support available in the messenger. RSS or table type replies can then be shown even in messengers such as Gitter, whose REST API provides no reply type other than text.

In the SUSI web app, the UI of the web search and table type results looks like this:

As the SUSI web app is made in React js, it has the necessary features to show the results this way. The messengers may not have these required features.

So the task is to convert the RSS or table type replies from the SUSI API to plain text before sending them to the messenger.

Let’s work it out.

Converting RSS results to text:

First, get familiar with the SUSI API reply to the query “why” by visiting this url – http://api.susi.ai/susi/chat.json?q=why.

The returned JSON object contains an array of objects as the value of the data key, like:

"data": [
        {
        "title": "Why is Oracle male?",
        "description": "Why is Oracle male?. http dba oracle com why is male htm. Oracle Why is masculine?. ",
        "link": "http://dba-oracle.com/t_why_is_oracle_male.htm"
      }
]

If we look carefully, each object has 3 main keys, namely “title”, “description” and “link”. So the task is to extract these 3 properties from each object and bind them together into one string.

So we traverse each object (i.e. each RSS result) in the data array and keep appending the values of the “title”, “description” and “link” keys to the ans variable (the snippet below appends the title and link). At the end we send the resultant string to the messenger bot as a reply.

Suppose we have the returned JSON object in the data variable.

// storing the number of rss results
var metaCnt = data.answers[0].metadata.count;
for (var i = 0; i < metaCnt; i++) {
    ans += ('Title : ');
    ans += data.answers[0].data[i].title + ', ';
    ans += ('Link : ');
    ans += data.answers[0].data[i].link + ', ';
    ans += '\n\n';
}

// send the message in ans variable to the messenger

Converting table type replies to text:

First, get familiar with the SUSI API reply to the query “universities in australia” by visiting this url – http://api.susi.ai/susi/chat.json?q=universities+in+australia.

The returned JSON object contains an array of objects representing universities as the value of the data key, in this form:

{
    "alpha_two_code": "AU",
    "name": "Australian Correspondence Schools",
    "country": "Australia",
    "web_page": "http://www.acs.edu.au/"
}

Here too, each object has 3 main keys, namely “name”, “country” and “web_page”, so extracting these 3 properties from each object and binding them together into one string does the job.

Again, we traverse each object (i.e. each university object) in the data array and keep appending the values of the “name”, “country” and “web_page” keys to the ans variable. At the end we send the resultant string to the messenger bot as a reply.

Suppose we have the returned JSON object in the data variable.

// the 3 main columns which we need are stored in the colNames variable
var colNames = data.answers[0].actions[0].columns;

// storing the number of table entries
var metaCnt = data.answers[0].metadata.count;
for (var i = 0; i < metaCnt; i++) {
    for (var cN in colNames) {
        // the column name
        ans += (colNames[cN] + ' : ');
        // value for that column name
        ans += data.answers[0].data[i][cN] + ', ';
    }
    ans += '\n\n';
}

// send the message in ans variable to the messenger

Resources

  1. By Slobodan Stojanović from Smashing Magazine: Develop a chat bot with Node.js.
  2. By Mikhail Larionov from Facebook blogs: List templates and check box plugin.

Generating the Mozilla All Hands Open Event Android App

The main aim of the FOSSASIA Open Event Android App is to give an event organiser the ability to generate an app through a single click by providing the necessary json and binary files. The app was tested on the Mozilla All Hands 2017 event, an annual conference held for Mozilla employees to showcase their work and interact with each other. The sample files for the event can be found here (https://github.com/fossasia/open-event/tree/master/sample/MozillaAllHands17). The data for the event was taken from here (https://sanfranciscoallhandsjune2017.sched.com/).

What was required for building the sample for the Mozilla All Hands 2017 event?

  • an images folder containing the necessary speaker images (with support for local images too), the logo of the event, and the background image of the event.
  • event json file which has all the event specific information like the name of the event, the schedule of the event, the description of the event etc.
  • forms json file having session and speaker form data.
  • meta json file having the root url of the event.
  • microlocations json file having all the locations where the sessions are going to happen.
  • session_types json file consisting data of all the type of session which will occur in the event (the length of session, the name of type and the id).
  • sessions json file consisting session specific data like the title of the session, start time and end time of session, which track that session belongs to etc.
  • speakers json file consisting of speaker specific data like the name of the speaker, image of the speaker, social links of the speaker etc.
  • sponsors json file consisting list of all sponsors of the event.
  • tracks json file consisting of tracks specific data.
  • a config.json file which consists of the api url and app name (an illustrative sketch follows this list).
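
For illustration only, a minimal config.json along the lines described in the last item might look like the sketch below; the field names are hypothetical, since the actual schema is defined by the app generator:

{
  "App_Name": "Mozilla All Hands 17",
  "Api_Link": "https://example.com/MozillaAllHands17"
}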

After generating the required zip containing the above json files, the zip is uploaded to this site (http://droidgen.eventyay.com/) and we get an apk after filling in the required information. This means any event organiser is able to generate the sample app by providing the necessary information about the event.

The sample can be found here:

Folder Link: https://github.com/fossasia/open-event/tree/master/sample/MozillaAllHands17

Zip Link: https://github.com/fossasia/open-event/tree/master/sample/MozillaAllHands17.zip

What did the Mozilla All Hands 2017 sample app look like?

The screenshots of the sample apk can be seen below:


What were the issues found in the sample app?

Compared to previous sample apks, like the Google IO Open Event Android App, some issues such as support for local speaker images and the background of the logo were solved. However, there were certain issues we observed on testing the app with the Mozilla All Hands 2017 event:

  1. The theme of the app remains the same no matter which event it is. It is important to give the event organiser the ability to customise the theme of the app.
  2. Certain information in the app like the event information is hard-coded and needs to be taken from the assets folder instead of strings.xml.

Related Links
