Password Recovery Link Generation in SUSI with JSON

In this blog, I will discuss how the SUSI server processes a password recovery request and responds with JSON. The post also covers parts of the client-side implementation for better insight.
All the clients work the same way. When you tap the forgot password button, the client asks whether you want to recover the password for your own custom server or for the standard one. This choice defines the base URL where the forgot password request is made. If you select the custom server radio button, you are asked to enter the URL of your server; otherwise the standard base URL is used. The API endpoint used is

/aaa/recoverpassword.json

The client makes a POST request with the "forgotemail" parameter containing the email id of the user making the request. Based on the email id provided, the server generates a client identity, but only if the email id is registered. If the email id is not found in the database, the server throws an exception with error code 422.

String usermail = call.get("forgotemail", null);
ClientCredential credential = new ClientCredential(ClientCredential.Type.passwd_login, usermail);
ClientIdentity identity = new ClientIdentity(ClientIdentity.Type.email, credential.getName());
if (!DAO.hasAuthentication(credential)) {
	throw new APIException(422, "email does not exist");
}

If the email id is registered against a valid user in the database, a method is called which returns a random string of the length passed in as a parameter. This random string acts as a token.
Below is the implementation of the createRandomString(int length) method.

public static String createRandomString(Integer length) {
    char[] chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789".toCharArray();
    StringBuilder sb = new StringBuilder();
    Random random = new Random();
    for (int i = 0; i < length; i++) {
        char c = chars[random.nextInt(chars.length)];
        sb.append(c);
    }
    return sb.toString();
}

This method is defined in the AbstractAPIHandler class. createRandomString(int length) initialises a character array with the alphabet (both upper and lower case) and the digits 0 to 9. A StringBuilder object is declared, and random characters from this array are appended to it inside the for loop.
Next, the generated token is associated with the user's identity (derived from the email id). Since the token has length 30 and is drawn from 62 possible characters, there are 62^30 possible tokens, so the chance of two tokens being identical is vanishingly small (approximately zero). The validity of the token is set to 7 days (i.e. one week). After that the token expires, and a new token is needed to reset the password.

String token = createRandomString(30);
ClientCredential tokenkey = new ClientCredential(ClientCredential.Type.resetpass_token, token);
Authentication resetauth = new Authentication(tokenkey, DAO.passwordreset);
resetauth.setIdentity(identity);
resetauth.setExpireTime(7 * 24 * 60 * 60);
resetauth.put("one_time", true);

Everything is set by now. The only thing left is to send a mail to the user. For that, we call the sendEmail() method of the EmailHandler class, which requires 3 parameters: the user's email id, the subject of the email, and the body of the email. The body contains a verification link. To get this verification link, getVerificationMailContent(String token) is called with the token generated in the previous step as a parameter.

String verificationLink = DAO.getConfig("host.url", "http://127.0.0.1:9000") + "/apps/resetpass/index.html?token=" + token;

The above statement gets the base URL of the server and appends the path of the reset password app along with the token received in the method call. The rest of the body is saved as a template in the /conf/templates/reset-mail.txt file. Finally, if no exception was caught during the process, the message "Recovery email sent to your email ID. Please check" and accepted = true are encoded into JSON data and sent to the client. If an exception was encountered, the exception message and accepted = false are sent to the client.
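
Putting these pieces together, the last part of the flow might look roughly like the sketch below. The subject string and the exact class structure are assumptions for illustration; only sendEmail(), getVerificationMailContent() and the response fields come from the description above.

// Sketch only: signatures in the actual SUSI server code may differ.
String subject = "SUSI AI Password Recovery";           // assumed subject text
String body = getVerificationMailContent(token);        // body contains the verification link
JSONObject result = new JSONObject(true);
try {
    // EmailHandler.sendEmail(email id, subject, body) dispatches the recovery mail
    EmailHandler.sendEmail(usermail, subject, body);
    result.put("accepted", true);
    result.put("message", "Recovery email sent to your email ID. Please check");
} catch (Exception e) {
    result.put("accepted", false);
    result.put("message", e.getMessage());
}
// result is serialised and returned to the client
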
Now the client processes the JSON object and notifies the user appropriately.

Additional Resources:

(StackOverflow- answered by Suresh Atta)

(example-code)

(javacodegeeks- example by Yogesh Mali)

Controlling Camera Actions Using Voice Interaction in Phimpme Android

In this blog, I will explain how I implemented Google voice actions to control camera features on the Phimpme Android project. I will cover the following features I have implemented on the Phimpme project:

  • Opening the application using Google Voice command.
  • Switching between the cameras.
  • Clicking a Picture and saving it through voice command.

Opening the application when the user gives a command to Google Now

When the user gives the command "Take a selfie" or "Click a picture" to Google Now, it directly opens the Phimpme camera activity.

First, we need to add an intent filter to the manifest file so that Google Now can detect the Phimpme camera activity.

<activity
   android:name=".opencamera.Camera.CameraActivity"
   android:screenOrientation="portrait"
   android:theme="@style/Theme.AppCompat.NoActionBar">
   <intent-filter>
       <action android:name="android.media.action.IMAGE_CAPTURE"/>

       <category android:name="android.intent.category.DEFAULT"/>
       <category android:name="android.intent.category.VOICE"/>
   </intent-filter>
</activity>

The category android:name="android.intent.category.VOICE" is added to the IMAGE_CAPTURE intent filter so that Google Now can detect the camera activity. For the Google Now assistant to accept commands in the camera activity, we also need to add the following STILL_IMAGE_CAMERA intent filter to the camera activity.

<intent-filter>
   <action android:name="android.media.action.STILL_IMAGE_CAMERA"/>

   <category android:name="android.intent.category.DEFAULT"/>
   <category android:name="android.intent.category.VOICE"/>
</intent-filter>

So, now when the user says “OK Google” and “Take a Picture” the camera activity on Phimpme opens.

Integrating Google Voice assistance in Camera Activity

Second, after opening the camera application the Google Assistant should ask a question.

The camera activity in Phimpme can be opened in two ways:

  • When opened from a different application.
  • When given a command to the Google Now assistant.

We need to check whether the camera activity was launched from Google Assistant or not before activating voice commands. We check this in the onResume() function.

@Override
public void onResume() {
   super.onResume();
   if (CameraActivity.this.isVoiceInteraction()) {
      startVoiceTrigger();
   }
}

If isVoiceInteraction() returns true, the voice assistant prompts the user.

Assistant asks which camera to use

Third, after the camera activity opens, the Google Assistant should ask which camera to use, front or back.

To take voice input from the user, we need to store the expected commands in a VoiceInteractor.PickOptionRequest. This request listens for the user's command. We also need to add synonyms for the same command.

To choose the rear camera

VoiceInteractor.PickOptionRequest.Option rear = new VoiceInteractor.PickOptionRequest.Option(getResources().getString(R.string.camera_rear), 0);
rear.addSynonym(getResources().getString(R.string.rear));
rear.addSynonym(getResources().getString(R.string.back));
rear.addSynonym(getResources().getString(R.string.normal)); 

I added synonyms like rear, normal and back.

To choose front camera

VoiceInteractor.PickOptionRequest.Option front = new VoiceInteractor.PickOptionRequest.Option(getResources().getString(R.string.camera_front), 1);
front.addSynonym(getResources().getString(R.string.front));
front.addSynonym(getResources().getString(R.string.selfie_camera));
front.addSynonym(getResources().getString(R.string.forward));

I added synonyms like front, selfie camera and forward.

For the assistant to ask a question such as "Which camera would you like to use?", I used getVoiceInteractor() and submitted a VoiceInteractor.PickOptionRequest with an Option[] array containing the front and rear options.

CameraActivity.this.getVoiceInteractor()
     .submitRequest(new VoiceInteractor.PickOptionRequest(
           new VoiceInteractor.Prompt("Which camera would you like to use?"),
           new VoiceInteractor.PickOptionRequest.Option[]{front, rear},
           null) {

The Google Assistant waits for a response from the user for only a few seconds before it goes inactive. If the user gives an unexpected command, the assistant asks the question one more time.

Check if the user gives an expected command or not.

We override the onPickOptionResult(boolean finished, Option[] selections, Bundle result) function. If finished && selections.length == 1, the speech matched exactly one of the options provided, and we check which option it was.

Check the command given by the user to switch between the cameras.

The two options were created with indexes 0 and 1. If the command given was "rear", selections[0].getIndex() returns 0 and the camera activity switches to the rear camera; if the command was "front", selections[0].getIndex() returns 1 and the camera activity switches to the front camera.

@Override
public void onPickOptionResult(boolean finished, Option[] selections, Bundle result) {
   if (finished && selections.length == 1) {
      Message message = Message.obtain();
      message.obj = result;
      if (selections[0].getIndex() == 0) {
         rearCamera();
         asktakePicture();
      }
      if (selections[0].getIndex() == 1) {
         asktakePicture();
      }
   } else {
      getActivity().finish();
   }
}

Click Picture when the user says "Cheese"

After switching the camera, the assistant prompts the message "Say cheese". For this we add a VoiceInteractor.Prompt("Say cheese").

We store the synonyms in a VoiceInteractor.PickOptionRequest.Option. I added synonyms like ready, go, take it, OK, and cheese to click a picture. If the user gives an unexpected response, the assistant checks whether selections.length == 1 and, if not, prompts the message "Say cheese" again.

private void asktakePicture() {
  VoiceInteractor.PickOptionRequest.Option option = new VoiceInteractor.PickOptionRequest.Option(getResources().getString(R.string.cheese), 2);
  option.addSynonym(getResources().getString(R.string.ready));
  option.addSynonym(getResources().getString(R.string.go));
  option.addSynonym(getResources().getString(R.string.take));
  option.addSynonym(getResources().getString(R.string.ok));
getVoiceInteractor()
        .submitRequest(new VoiceInteractor.PickOptionRequest(
              new VoiceInteractor.Prompt(getResources().getString(R.string.say_cheese)),
              new VoiceInteractor.PickOptionRequest.Option[]{option},
              null) {
           @Override
           public void onPickOptionResult(boolean finished, Option[] selections, Bundle result) {
              if (finished && selections.length == 1) {
                 Message message = Message.obtain();
                 message.obj = result;
                 takePicture();
              } else {
                 getActivity().finish();
              }
           }
           @Override
           public void onCancel() {
              getActivity().finish();
           }
        });
}

Conclusion

Now, users can start the camera activity on Phimpme through the voice command "Take a selfie", switch between the cameras through commands like "selfie camera", "back camera", "back" or "front", and finally click a picture by saying "Cheese", "Click it" or a related synonym.

Github

Resources

Aligning Images to Same Height Maintaining Ratio in Susper

In this blog, I'll share how we aligned images to the same height in Susper without disturbing their aspect ratio. When it comes to aligning images perfectly, they should have:

  • Same height.
  • A proper ratio to maintain the image quality. Many developers apply the same width and height without keeping the image ratio in mind, which results in:
    • Blurred image,
    • Image with a lot of pixels,
    • Cropping of an image.

Earlier, Susper had an image layout like this:

In the screenshot, the images are not properly aligned and do not have the same height. We wanted to improve the layout of images to match market leaders like Google and DuckDuckGo.

  • How did we implement a better layout for images?

<div class="container">
  <div class="grid" *ngIf="Display('images')">
    <div class="cell" *ngFor="let item of item$ | async">
      <a class="image-pointer" href="{{item.link}}">
        <img class="responsive-image" src="{{item.link}}"></a>
    </div>
  </div>
</div>

I have created a container into which images are loaded from the YaCy server. Then I created a grid with an equal number of rows and columns and adjusted their height and width so that each division of the grid forms a cell. The image loads inside the cell; each cell holds one image.

.grid {
  padding-left: 80px;
}
.container {
  width: 100%;
  margin: 0;
  top: 0;
  bottom: 0;
  padding: 0;
}

After implementing this, we faced issues like cropping of the image inside a cell. To avoid cropping and maintain the image ratio, we introduced the .responsive-image class.

.responsive-image {
  max-width: 100%;
  height: 200px;
  padding-top: 20px;
  padding: 0.6%;
  display: inline-block;
  float: left;
}

This is how Susper’s image section looks now:

It took some time to align images, but somehow we succeeded in creating a perfect layout for the images.

We are still facing some issues with images: some of them don't appear due to broken links. This will be resolved soon on the server side.

Resources

Generating the Mozilla All Hands Open Event Android App

The main aim of the FOSSASIA Open Event Android App is to give an event organiser the ability to generate the app with a single click by providing the necessary JSON and binary files. The app was tested on the Mozilla All Hands 2017 event, an annual conference held for Mozilla employees to showcase their work and interact with each other. The sample files for the event can be found here: https://github.com/fossasia/open-event/tree/master/sample/MozillaAllHands17. The data for the event was taken from https://sanfranciscoallhandsjune2017.sched.com/.

What was required for building the sample for Mozilla All Hands 2017 event?

  • images folder containing the necessary speaker images (with support for local images too), the logo of the event, and the background image of the event.
  • event json file which has all the event-specific information like the name of the event, the schedule of the event, the description of the event etc.
  • forms json file having session and speaker form data.
  • meta json file having the root url of the event.
  • microlocations json file having all the locations where the sessions are going to happen.
  • session_types json file consisting of data on all the types of session which will occur in the event (the length of the session, the name of the type and the id).
  • sessions json file consisting of session-specific data like the title of the session, start time and end time of the session, which track the session belongs to etc.
  • speakers json file consisting of speaker-specific data like the name of the speaker, image of the speaker, social links of the speaker etc.
  • sponsors json file consisting of the list of all sponsors of the event.
  • tracks json file consisting of track-specific data.
  • config.json file which consists of the API URL and the app name (a hypothetical example is sketched below).
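
For illustration, a config.json along these lines would carry the two values mentioned in the last point. The key names and values below are hypothetical placeholders, not the generator's confirmed format; refer to the sample folder linked above for the exact keys expected.

{
    "Api_Link": "https://example.com/api/v1/events/mozilla-all-hands-2017",
    "App_Name": "Mozilla All Hands 2017"
}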

After generating the required zip containing the above json files, the zip is uploaded to http://droidgen.eventyay.com/ and we get an apk after filling in the required information. This means any event organiser is able to generate the sample app by providing the necessary information about the event.

The sample can be found here:

Folder Link: https://github.com/fossasia/open-event/tree/master/sample/MozillaAllHands17

Zip Link: https://github.com/fossasia/open-event/tree/master/sample/MozillaAllHands17.zip

How did the Mozilla All Hands 2017 sample app look like?

The screenshots of the sample apk can be seen below:

What were the issues found in the sample app?

As compared to a previous sample apk like the Google IO Open Event Android App, some issues such as support for local speaker images and the background issue of the logo were solved. However, there were certain issues we observed on testing the app with the Mozilla All Hands 2017 event:

  1. The theme of the app remains the same no matter which event it is. It is important to give the event organiser the ability to customise the theme of the app.
  2. Certain information in the app like the event information is hard-coded and needs to be taken from the assets folder instead of strings.xml.

Related Links

Using Jackson Library in Open Event Android App for JSON Parsing

Jackson is a popular library for mapping Java objects to JSON and vice versa, and has better parsing speed compared to other popular parsing libraries like GSON, JSON-P etc. This blog post gives a basic explanation of how we utilised the Jackson library in the Open Event Android App.

To use the Jackson library we need to add the dependency for it in the app/build.gradle file.

compile 'com.squareup.retrofit2:converter-jackson:2.2.0'

After updating the app/build.gradle file with the code above we were able to use Jackson library in our project.
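
Since this artifact is the Retrofit 2 Jackson converter, the usual way to use it is to plug it into the Retrofit builder. A minimal sketch (the base URL here is a placeholder, not the app's actual endpoint; the classes used are retrofit2.Retrofit and retrofit2.converter.jackson.JacksonConverterFactory):

Retrofit retrofit = new Retrofit.Builder()
        .baseUrl("https://example.com/api/v1/")                // placeholder base URL
        .addConverterFactory(JacksonConverterFactory.create()) // Jackson handles JSON (de)serialisation
        .build();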

Example of Jackson Library JSON Parsing

To explain how the mapping is done, let us take an example. The file given below is the sponsors.json file from the Android app.

[
    {
        "description": "",
        "id": 1,
        "level": 3,
        "name": "KI Group",
        "type": "Gold",
        "url": "",
        "logo-url": ""
    }
]

The sponsors.json file consists of 7 main attributes, namely description, id, level, name, type, url and logo-url, describing one sponsor of the event. The Sponsors.java file for converting the JSON response to a Java POJO was written as follows, utilising the Jackson library:

public class Sponsors extends RealmObject {
    @JsonProperty("id")
    private int id;

    @JsonProperty("type")
    private String type;

    @JsonProperty("description")
    private String description;

    @JsonProperty("level")
    private String level;

    @JsonProperty("name")
    private String name;

    @JsonProperty("url")
    private String url;

    @JsonProperty("logo-url")
    private String logoUrl;
}

As we can see from the above code snippet, the JSON response is converted to Java POJO objects simply by using the @JsonProperty("...") annotation, which does the work for us.
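
To see what the annotation achieves in isolation (outside Retrofit), the same POJO could be filled from the JSON above with Jackson's ObjectMapper. This is only an illustration, not code from the app; it uses com.fasterxml.jackson.databind.ObjectMapper and com.fasterxml.jackson.core.type.TypeReference:

ObjectMapper mapper = new ObjectMapper();
// sponsorsJson is assumed to hold the contents of sponsors.json shown above
List<Sponsors> sponsors = mapper.readValue(sponsorsJson,
        new TypeReference<List<Sponsors>>() { });
// Each @JsonProperty annotation tells Jackson which JSON key fills which field.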

Another feature which makes this library great is the setter and getter annotations, which let us map two different JSON attributes to a single variable if need be. We faced this situation when moving from the old API to the new JSON API, where we wanted support for both the old and the new JSON attributes. We simply used the following code snippet, which made our transition to the new API easier.

public class Sponsors extends RealmObject {
    private int id;
    private String type;
    private String description;
    private String level;
    private String name;
    private String url;
    private String logoUrl;

    // old API attribute
    @JsonSetter("logo")
    public void setLogo(String logoUrl) {
        this.logoUrl = logoUrl;
    }

    // new API attribute (the setter needs a distinct method name to compile)
    @JsonSetter("logo-url")
    public void setLogoUrl(String logoUrl) {
        this.logoUrl = logoUrl;
    }
}

As we can see, the setter annotations allow multiple JSON attributes to be mapped to a single variable if need be, making the code easily adaptable with little overhead.

Related Links:

Using Flask SocketIO Library in the Apk Generator of the Open Event Android App

Recently, the Flask-SocketIO library was used in the apk generator of the Open Event Android App, as it gives access to low-latency bi-directional communication between the client and the server. The client side of the apk generator is written in JavaScript, which let us use the official SocketIO client library to establish a permanent connection to the server.

The main purpose of using the library was to display logs to the user while the app generation process runs. This gives the user additional help in checking what is wrong with the uploaded file in case an error occurs. The library establishes a connection between the server and the client, so during the app generation process the server sends real-time logs to the client, where they are displayed to the user in the frontend.

To use this library, we first need to install it using the pip command:

pip install flask-socketio

This library depends on an asynchronous service, which can be selected from the options listed below:

  1. Eventlet
  2. Gevent
  3. Flask development server based on Werkzeug

Amongst the three listed, eventlet is the most performant option as it has support for long-polling and WebSocket transports.

The next thing was importing this library in the Flask application, i.e. the apk generator, as follows:

from flask_socketio import SocketIO

current_app = create_app()
socketio = SocketIO(current_app)

if __name__ == '__main__':
    socketio.run(current_app)

The main use of the above statement, socketio.run(current_app), is that it encapsulates the startup of the web server. When the application is run in debug mode, the Werkzeug development server is preferred. However, in production mode the use of the eventlet or gevent web server is recommended.

We wanted to show the status messages to the user in the form of logs. For this, the generator.py file, which contains the status messages, was examined first, and the messages were sent to the client side of the generator over the connection established by this library. The code written on the server side to send messages to the client was as follows:

def handle_message(self, message):
    if message is not None:
        namespace = "/" + self.identifier
        # send() is imported from flask_socketio
        send(message, namespace=namespace)

As we can see from the above code, the message is sent to a particular namespace. The main idea of using a namespace is that if more than one user is using the generator at the same time, a plain send() would deliver the messages to all connected clients, which would mix up the status messages. Using a namespace ensures a message is sent only to the intended client.

Once the server could send messages, we needed to add functionality on the client side to receive them and update the logs area with the content received. For that, the socket connection is established once the generate button is clicked, which generates the identifier used as the namespace for that process.

socket = io.connect(location.protocol + "//" + location.host + "/" + identifier, {reconnection: false});

This piece of code establishes the socket connection and lets the client side send messages to and receive messages from the server.

The next thing was receiving messages from the server. For this, the following code snippet was added:

socket.on('message', function(message) {
    $buildLog.append(message);
});

This piece of code receives messages from the server for unnamed events. Once the connection is established and messages are received, the logs are updated by appending each message, showing the user real-time information about the build status of their app.

This was how the idea of logs being shown in the apk generator was incorporated by making the required changes in the server and client side code using the Flask SocketIO library.

Related Links:

How to Implement Memory like Servlets in SUSI.AI

In this blog, I'll discuss how the server retrieves previous messages from the log files in the SUSI server. The SUSI AI clients, Android, iOS and web chat, follow a very simple rule: whenever a user logs in to the app, the app makes an HTTP GET call to the server in the background, and in response the server returns the chat history.

Link to the API endpoint -> http://api.susi.ai/susi/memory.json

But parsing a lot of data depends on the connection speed. If the connection is poor or slow, fetching the whole history would cost the user time. To prevent this, the server by default returns the last 10 pairs of messages, and it is up to the client how many messages it wants to render. If the client wants a different number of message pairs, it passes the cognitions parameter in the GET request. For example, for the last 2 pairs the modified endpoint will be:

http://api.susi.ai/susi/memory.json?cognitions=2
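
For illustration, a bare-bones Java client could fetch this endpoint as sketched below. Real SUSI clients use their own networking stacks and also send the logged-in user's access token, which is omitted here:

// Illustration only; uses java.net.HttpURLConnection and org.json.
URL url = new URL("http://api.susi.ai/susi/memory.json?cognitions=2");
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setRequestMethod("GET");
StringBuilder response = new StringBuilder();
try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(connection.getInputStream()))) {
    String line;
    while ((line = reader.readLine()) != null) {
        response.append(line);
    }
}
// The "cognitions" array in the response holds the past message pairs.
JSONArray pastMessages = new JSONObject(response.toString()).getJSONArray("cognitions");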

But how does the server process it? Let us see.
Browse to the susi_server/src/ai/susi/server/api/susi/UserService.java file. This is the main working servlet. If you are new and wondering how servlets for SUSI are implemented, please go through this first: how-to-add-a-new-servletapi-to-susi-server.
This is how the serviceImpl() method looks:

@Override
    public ServiceResponse serviceImpl(Query post, HttpServletResponse response, Authorization user, final JsonObjectWithDefault permissions) throws APIException {

        int cognitionsCount = Math.min(10, post.get("cognitions", 10));
        String client = user.getIdentity().getClient();
        List<SusiCognition> cognitions = DAO.susi.getMemories().getCognitions(client);
        JSONArray coga = new JSONArray();
        for (SusiCognition cognition: cognitions) {
            coga.put(cognition.getJSON());
            if (--cognitionsCount <= 0) break;
        }
        JSONObject json = new JSONObject(true);
        json.put("cognitions", coga);
        return new ServiceResponse(json);
    }

In the first step, we take the minimum of the default value (i.e. 10) and the value of the cognitions GET parameter. That many message pairs are encoded into a JSONArray and sent to the client.

Whenever the server receives a valid signup request, it creates a directory whose name is "email_" followed by the user's email id. In this directory, a log.txt file is maintained which stores all the queries along with the other details associated with them. If that user then queries "http://api.susi.ai/susi/chat.json?timezoneOffset=-330&q=flip+a+coin", an entry like the following is written to the log:

{
    "query": "flip a coin",
    "count": 1,
    "client_id": "",
    "query_date": "2017-06-30T12:22:05.918Z",
    "answers": [{
        "data": [{
            "0": "flip a coin",
            "token_original": "coin",
            "token_canonical": "coin",
            "token_categorized": "coin",
            "timezoneOffset": "-330",
            "answer": "tails",
            "skill": "/susi_skill_data/models/general/entertainment/en/flip_coin.txt",
            "_etherpad_dream": "cricket"
        }],
        "metadata": {
            "count": 1
        },
        "actions": [{
            "type": "answer",
            "expression": "tails"
        }],
        "skills": ["/susi_skill_data/models/general/entertainment/en/flip_coin.txt"]
    }],
    "answer_date": "2017-06-30T12:22:05.928Z",
    "answer_time": 10,
    "language": "en"
}

The server has the user's identity. It uses this identity to append the entry to the respective log file.

The next steps in retrieving the messages are pretty easy: get the identity of the current user session and use it to populate the JSONArray named coga. This is finally encoded in a JSONObject along with other basic details and returned to the clients, which render the data received and show the messages in an appropriate way.

Resources

Writing Tests for ISO8601Date.java class of the Open Event Android App

The ISO8601Date.java class of the Open Event Android App is a util class written to perform date manipulation and keep the code base simpler and more deterministic. However, it was equally important to test the results from this util class to ensure it returned what we wanted. A test class named "DateTest.java" was written to ensure all the edge cases of converting date strings from one timezone to another were handled properly.

For writing unit tests, we first needed to add the JUnit dependency in the app module's build.gradle file as shown below:

dependencies {
  testCompile 'junit:junit:4.12'
}

Then a JUnit 4 test class, a Java class containing the required test methods, was created. The structure of the class looked like this:

public class DateTest {
    @Test
    public void methodName() {
    }
}

The next step was including all the required methods to ensure the util class returned the correct results. Various edge cases were taken into account, such as converting a date string from the local timezone to a specified timezone, from an international timezone to the local timezone, from the local timezone to an international timezone, and many more. Some of the methods added in the class are shown below:

  • Test of conversion from local timezone to specified timezone:

This function aimed at ensuring the util class worked well with date conversion from local time zone to a specified timezone. An example as shown below was taken where the conversion of the date string was tested from UTC timezone to Singapore timezone.

@Test
public void shouldConvertLocalTimeZoneDateStringToSpecifiedTimeZoneDateString() {
    ISO8601Date.setTimeZone("UTC");
    ISO8601Date.setEventTimeZone("Asia/Singapore");

    String dateString1 = "2017-06-02T07:59:10Z";
    String actualString = ISO8601Date.getTimeZoneDateStringFromString(dateString1);
    String expectedString = "Thu, 01 Jun 2017, 23:59, UTC";
    Assert.assertEquals(expectedString, actualString);
}

  • Test of conversion from local timezone to international timezone:

This function aimed at ensuring the util class worked well with the date conversion from local timezone to international timezone. An example as shown below was taken where the conversion of the date string was tested from Amsterdam timezone to Singapore timezone.

@Test
public void shouldConvertLocalTimeZoneDateStringToInternationalTimeZoneDateString() {
    ISO8601Date.setTimeZone("Europe/Amsterdam");
    ISO8601Date.setEventTimeZone("Asia/Singapore");

    String dateString = "2017-06-02T02:29:10Z";
    String actualString = ISO8601Date.getTimeZoneDateStringFromString(dateString);
    String expectedString = "Thu, 01 Jun 2017, 20:29, GMT+02:00";
    Assert.assertEquals(expectedString, actualString);
}


Above were some functions added to ensure the conversion of a date string from one timezone to another was correct and thus ensured the util class was working properly and returned the results as required.

The last thing left was running the test to check the results the util class returned. For this we had to do two things:

  1. Sync the project with Gradle.
  2. Run the test by right clicking on the class and selecting “Run” option.

Through this we were able to run the tests and check the output of the util class for different cases, with the results visible in the Android Monitor in Android Studio.

Related Links:

  1. This link is about building effective unit tests in android. (https://developer.android.com/training/testing/unit-testing/index.html)
  2. This link is about the unit testing on date processing. (https://stackoverflow.com/questions/565289/unit-testing-code-that-does-date-processing-based-on-todays-date)

Expandable ListView In PSLab Android App

In the PSLab Android App, we show a list of experiments for the user to perform or refer to while performing an experiment using the PSLab hardware device. A long list of experiments needs to be subdivided into topics like Electronics, Electrical, School Level, Physics, etc. In turn, each category like Electronics or Electrical can have a sub-list of experiments like:

  • Electronics
    • Diode I-V characteristics
    • Zener I-V characteristics
    • Transistor related experiments
  • Electrical
    • Transients RLC
    • Bode Plots
    • Ohm’s Law

This list can continue in a similar fashion for other categories as well. We had to display this experiment list to the users with a good UX, and ExpandableListView seemed the most appropriate option.

ExpandableListView is a two-level list view. In the group view, an individual item can be expanded to show its children. The items associated with an ExpandableListView come from an ExpandableListAdapter.

Implementation of Experiments List Using ExpandableListView

First, the ExpandableListView was declared in the xml layout file inside some container like LinearLayout/RelativeLayout.

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
   android:layout_width="match_parent"
   android:layout_height="match_parent"
   android:orientation="vertical">
   <ExpandableListView
       android:id="@+id/saved_experiments_elv"
       android:layout_width="match_parent"
       android:layout_height="wrap_content"
       android:divider="@color/colorPrimaryDark"
       android:dividerHeight="2dp" />
</LinearLayout>

Then we populated the data into the ExpandableListView by making an adapter that extends BaseExpandableListAdapter and implements its methods. We then passed a Context, a List<String> and a Map<String, List<String>> to the adapter constructor.

Context: for inflating the layout

List<String>: contains titles of unexpanded list

Map<String,List<String>>: contains sub-list mapped with title string

public SavedExperimentAdapter(Context context,
                                 List<String> experimentGroupHeader,
                                 HashMap<String, List<String>> experimentList) {
       this.context = context;
       this.experimentHeader = experimentGroupHeader;
       this.experimentList = experimentList;
   }
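
Besides the constructor above and the two view methods shown next, BaseExpandableListAdapter requires a few more overrides. A typical implementation for this data layout looks like the following (the actual PSLab adapter, linked below, may differ slightly):

@Override
public int getGroupCount() {
    // number of categories (group headers)
    return experimentHeader.size();
}

@Override
public int getChildrenCount(int groupPosition) {
    // number of experiments under a category
    return experimentList.get(experimentHeader.get(groupPosition)).size();
}

@Override
public Object getGroup(int groupPosition) {
    return experimentHeader.get(groupPosition);
}

@Override
public Object getChild(int groupPosition, int childPosition) {
    return experimentList.get(experimentHeader.get(groupPosition)).get(childPosition);
}

@Override
public long getGroupId(int groupPosition) {
    return groupPosition;
}

@Override
public long getChildId(int groupPosition, int childPosition) {
    return childPosition;
}

@Override
public boolean hasStableIds() {
    return false;
}

@Override
public boolean isChildSelectable(int groupPosition, int childPosition) {
    // individual experiments should respond to clicks
    return true;
}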

In the getGroupView() method, we inflate, set the title and return the group view, i.e. the main list item that expands to show its sub-list when clicked. You can define your own layout in XML and inflate it; for PSLab Android, we used the default android.R.layout.simple_expandable_list_item_2 layout provided by Android:

@Override
public View getGroupView(int groupPosition, boolean isExpanded, View convertView, ViewGroup parent) {
   String headerTitle = (String) getGroup(groupPosition);
   if (convertView == null) {
       LayoutInflater inflater = (LayoutInflater) this.context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
       convertView = inflater.inflate(android.R.layout.simple_expandable_list_item_2, null);
   }
   TextView tvExperimentListHeader = (TextView) convertView.findViewById(android.R.id.text1);
   tvExperimentListHeader.setTypeface(null, Typeface.BOLD);
   tvExperimentListHeader.setText(headerTitle);
   TextView tvTemp = (TextView) convertView.findViewById(android.R.id.text2);
   tvTemp.setText(experimentDescription.get(groupPosition));
   return convertView;
}

Similarly, in the getChildView() method, we inflate, set the data and return the child view. We wanted a simple TextView as the sub-list item, so we inflated a layout containing only a TextView and set its text by taking a reference to the TextView from the inflated view.

@Override
public View getChildView(int groupPosition, int childPosition, boolean isLastChild, View convertView, ViewGroup parent) {
   String experimentName = (String) getChild(groupPosition, childPosition);
   if (convertView == null) {
       LayoutInflater inflater = (LayoutInflater) this.context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
       convertView = inflater.inflate(R.layout.experiment_list_item, null);
   }
   TextView tvExperimentTitle = (TextView) convertView.findViewById(R.id.exp_list_item);
   tvExperimentTitle.setText(experimentName);
   return convertView;
}

The complete code for the Adapter can be seen here.

After creating the adapter, we proceeded similarly to a normal ListView: take the reference to the ExpandableListView by findViewById() (or @BindView if you are using ButterKnife) and set the adapter to an instance of the adapter created above.

@BindView(R.id.saved_experiments_elv)
ExpandableListView experimentExpandableList;
experimentAdapter = new SavedExperimentAdapter(context, headerList, map);
experimentExpandableList.setAdapter(experimentAdapter);
Source: PSLab Android
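
For reference, the headerList and map passed to the adapter above could be assembled like this, using the categories from the beginning of the post (a sketch, not the full PSLab experiment list):

List<String> headerList = new ArrayList<>();
HashMap<String, List<String>> map = new HashMap<>();

headerList.add("Electronics");
map.put("Electronics", Arrays.asList(
        "Diode I-V characteristics",
        "Zener I-V characteristics",
        "Transistor related experiments"));

headerList.add("Electrical");
map.put("Electrical", Arrays.asList(
        "Transients RLC",
        "Bode Plots",
        "Ohm's Law"));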

Roadmap

We are planning to divide the experiment sub-list into categories like

  • Electronics
    • Diode
      • Diode I-V
      • Zener I-V
      • Diode Clamping
      • Diode Clipping
    • BJT and FET
      • Transistor CB (Common Base)
      • Transistor CE (Common Emitter)
      • Transistor Amplifier
      • N-FET output characteristic
    • Op-Amps
  • Electrical

This is a bit more complex than it looks: I tried using an ExpandableListView as a child of a group item but ran into some errors. I will write a post as soon as this view hierarchy has been achieved.

Resources

Share Images on Pinterest from Phimpme Android Application

After successfully establishing Pinterest authentication in Phimpme our next goal was to share the image on the Pinterest website directly from Phimpme, without using any native Android application.

Adding Pinterest Sharing option in Sharing Activity in Phimpme

To add various sharing options to the sharing activity in the Phimpme project, I used a ScrollView for the list of sharing options, which includes Facebook, Twitter, Pinterest, Imgur, Flickr and Instagram. All the app icons with their names are arranged in a TableLayout in the activity_share.xml file, with two columns per table row. In this way, it is easier to add more app icons in future development.

<ScrollView
   android:layout_width="wrap_content"
   android:layout_height="@dimen/scroll_view_height"
   android:layout_above="@+id/share_done"
   android:id="@+id/bottom_view">
   <LinearLayout
       android:layout_width="wrap_content"
       android:layout_height="wrap_content"
       android:orientation="vertical">
       <TableLayout

The Pinterest app icon is added to the icons_drawables array. This array is then used to inflate the icons in the list view.

private int[] icons_drawables = {R.drawable.ic_facebook_black, R.drawable.ic_twitter_black, R.drawable.ic_instagram_black, R.drawable.ic_wordpress_black, R.drawable.ic_pinterest_black};

The Pinterest title is added to the titles_text array. This array is then used to inflate the names of the various sharing options.

private int[] titles_text = {R.string.facebook, R.string.twitter, R.string.instagram,
        R.string.wordpress, R.string.pinterest};

Prerequisites to share Image on Pinterest

To share an image on Pinterest, a user has to add a caption and a Board ID. Our first milestone was to get the Board ID from the user. I achieved this by taking the input in a dialog box: when the user clicks on the Pinterest option, a dialog pops up and the user can enter their Board ID.

private void openPinterestDialogBox() {
   AlertDialog.Builder captionDialogBuilder = new AlertDialog.Builder(SharingActivity.this, getDialogStyle());
   final EditText captionEditText = getCaptionDialog(this, captionDialogBuilder);

   captionEditText.setHint(R.string.hint_boardID);

   captionDialogBuilder.setNegativeButton(getString(R.string.cancel).toUpperCase(), null);
   captionDialogBuilder.setPositiveButton(getString(R.string.post_action).toUpperCase(), new DialogInterface.OnClickListener() {
       @Override
       public void onClick(DialogInterface dialog, int which) {
            // Intentionally left empty; this listener is overridden below
            // so that the dialog is not dismissed on invalid input
       }
   });

   final AlertDialog passwordDialog = captionDialogBuilder.create();
   passwordDialog.show();

   passwordDialog.getButton(AlertDialog.BUTTON_POSITIVE).setOnClickListener(new View.OnClickListener() {
       @Override
       public void onClick(View v) {
           String captionText = captionEditText.getText().toString();
           boardID =captionText;
           shareToPinterest(boardID);
           passwordDialog.dismiss();
       }
   });
}

A user can fetch the Board ID by following the steps:

Board ID is necessary because it specifies where the image needs to be posted.

Creating custom post function for Phimpme

The image is posted using a function in the PDKClient class. PDKClient is found in the PDK module which we get after importing the Pinterest SDK. Every image posted on Pinterest is called a Pin, so we call the createPin function. The Pinterest SDK only accepts an image URL to share, i.e. the image must already be on the internet. For this reason, I wrote a custom createPin function that also accepts a Bitmap as a parameter.

public void createPin(String note, String boardId, Bitmap image, String link, PDKCallback callback) {
   if (Utils.isEmpty(note) || Utils.isEmpty(boardId) || image == null) {
       if (callback != null) callback.onFailure(new PDKException("Board Id, note, Image cannot be empty"));
       return;
   }

   HashMap<String, String> params = new HashMap<String, String>();
   params.put("board", boardId);
   params.put("note", note);
   params.put("image_base64", Utils.base64String(image));
   if (!Utils.isEmpty(link)) params.put("link", link);
   postPath(PINS, params, callback);
}

Compressing Bitmaps

Since the Pinterest SDK cannot accept a Bitmap, I added a function to compress the Bitmap and encode it as a Base64 string.

public static String base64String(Bitmap bitmap) {
   ByteArrayOutputStream baos = new ByteArrayOutputStream();
   bitmap.compress(Bitmap.CompressFormat.JPEG, 100, baos);
   String b64Str = Base64.encodeToString(baos.toByteArray(), Base64.NO_WRAP);
   return b64Str;
}

Calling the createPin function from SharingActivity

From SharingActivity we call the createPin function, passing the caption of the image, the Board ID, the image bitmap and the link (which is optional) as parameters.

PDKClient
       .getInstance().createPin(caption, boardID, image, null, new PDKCallback() {

If the image is posted successfully, the onSuccess function is called, which shows a success message in a snackbar. Otherwise, the onFailure function is called, which displays a failure message in a snackbar.

@Override
public void onSuccess(PDKResponse response) {
   Log.d(getClass().getName(), response.getData().toString());
   Snackbar.make(parent, R.string.pinterest_post, Snackbar.LENGTH_LONG).show();
   //Toast.makeText(SharingActivity.this,message,Toast.LENGTH_SHORT).show();

}

@Override
public void onFailure(PDKException exception) {
   Log.e(getClass().getName(), exception.getDetailMessage());
   Snackbar.make(parent, R.string.Pinterest_fail, Snackbar.LENGTH_LONG).show();
   //Toast.makeText(SharingActivity.this, boardID + caption, Toast.LENGTH_SHORT).show();

}

Conclusion

In Phimpme, users can now share an image on Pinterest directly from the application, without the use of the native Pinterest application.

Github

Resources