Sorting Photos in Phimpme Android

The Phimpme Android application features a fully fledged gallery interface with options to switch between an all photos mode and an albums mode, and to sort photos according to various sort actions. Sorting photos helps the user get to the desired photo immediately, without having to scroll to the end of the list in case it is the last photo in the list generated automatically by scanning the phone for images using Android's MediaStore class. In this tutorial, I will be discussing how we achieved the sorting option in the Phimpme application with the help of some code snippets.

To sort the all photos list, we first need a list of all the photos, obtained by scanning the phone through the MediaStore content provider via the code snippet provided below:

uri = android.provider.MediaStore.Images.Media.EXTERNAL_CONTENT_URI;
String[] projection = {MediaStore.MediaColumns.DATA};
cursor = activity.getContentResolver().query(uri, projection, null, null, null);
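
The cursor is then walked in a while loop; a minimal sketch of that loop (the listOfAllImages name is an assumption for illustration) could look like this:

ArrayList<String> listOfAllImages = new ArrayList<>();
if (cursor != null) {
    int pathColumn = cursor.getColumnIndexOrThrow(MediaStore.MediaColumns.DATA);
    while (cursor.moveToNext()) {
        // extract the file path of each image and store it in the list
        listOfAllImages.add(cursor.getString(pathColumn));
    }
    cursor.close();
}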

In the above code, we use a cursor to point to each photo, extract the path of each image, and store the paths in a list inside a while loop. After we generate the list of paths of all the images, we have to convert it into a list of Media objects using the file paths, with the code below:

for (String path : listOfAllImages) {
    list.add(new Media(new File(path)));
}
return list;

After generating the list of all images, we can sort the photos using Java's Collections class. In Phimpme Android we provide the option to sort photos in different categories, namely:

  1. Name Sort action
  2. Date Sort action
  3. Size Sort action
  4. Numeric Sort action

As sorting is a somewhat heavy task, doing it on the main thread would freeze the application's UI, so we perform it in an AsyncTask that shows a progress dialog. After adding the above four options to the menu, we define an AsyncTask to sort the images; in its onPreExecute method we display a progress dialog to tell the user that sorting is in progress, as depicted in the code snippet below:

AlertDialog.Builder progressDialog = new AlertDialog.Builder(LFMainActivity.this, getDialogStyle());
dialog = AlertDialogsHelper.getProgressDialog(LFMainActivity.this, progressDialog,
      getString(R.string.loading_numeric), all_photos ? getString(R.string.loading_numeric_all) : getAlbum().getName());
dialog.show();

In the doInBackground method of the AsyncTask, we sort the list of all photos using the Collections class and its static sort method. It takes the list of all media files as the first parameter and a MediaComparator as the second; the comparator is obtained from the sorting mode (first parameter) and the sorting order (second parameter), and the sorting order decides whether to arrange the list in ascending or descending order.

getAlbum().setDefaultSortingMode(getApplicationContext(), NUMERIC);
Collections.sort(listAll, MediaComparators.getComparator(getAlbum().settings.getSortingMode(), getAlbum().settings.getSortingOrder()));

After sorting, we have to update the data set to reflect the changes to the list in the UI. We do this in the onPostExecute method of the AsyncTask, after dismissing the progress dialog to avoid window leaks in the application. You can read more about window leaks in Android due to ProgressDialog in the resources below.

dialog.dismiss();
mediaAdapter.swapDataSet(listAll);
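
Putting these pieces together, a minimal sketch of the whole task might look like the following (SortPhotosTask is an assumed name; dialog, listAll and mediaAdapter are the objects from the snippets above):

private class SortPhotosTask extends AsyncTask<Void, Void, Void> {

    @Override
    protected void onPreExecute() {
        // show the progress dialog before the sorting starts
        dialog.show();
    }

    @Override
    protected Void doInBackground(Void... params) {
        // heavy work off the main thread: sort the list of media
        Collections.sort(listAll, MediaComparators.getComparator(
                getAlbum().settings.getSortingMode(), getAlbum().settings.getSortingOrder()));
        return null;
    }

    @Override
    protected void onPostExecute(Void result) {
        // dismiss the dialog first to avoid window leaks, then refresh the UI
        dialog.dismiss();
        mediaAdapter.swapDataSet(listAll);
    }
}

The task is then started from the menu handler with new SortPhotosTask().execute();.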

To get the full source code, you can refer to the Phimpme Android repository listed in the resources below.

Resources

  1. Android developer guide to the MediaStore class: https://developer.android.com/reference/android/provider/MediaStore.html
  2. GitHub LeafPic repository: https://github.com/HoraApps/LeafPic
  3. StackOverflow – Preventing window leaks in Android: https://stackoverflow.com/questions/6614692/progressdialog-how-to-prevent-leaked-window
  4. Blog – Sorting lists in Android: http://www.worldbestlearningcenter.com/tips/Android-sort-ListView.htm

Implementing Text-to-Speech (TTS) in SUSI Android

Mobile assistants are designed to perform tasks that the user commands through a chat UI or speech. The Android OS already provides Text to Speech (TTS) and Speech to Text (STT) features, available from Android version 1.6 onward. In this blog post I will show how TTS is implemented in SUSI Android and how I fixed the 'delay in speech response' issue.

The TextToSpeech class controls the TTS engine. To use it, import the class in the activity where you want the text to speech feature.

import android.speech.tts.TextToSpeech;

After importing the TextToSpeech class, we need to initialize it:

TextToSpeech tts = new TextToSpeech(this, this);

Here the first parameter is the Context and the other one is the listener. The listener is used to inform our app that the engine is ready to use. In order to be notified, we have to implement TextToSpeech.OnInitListener.

TextToSpeech.OnInitListener listener = new TextToSpeech.OnInitListener() {
    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            tts.setLanguage(Locale.UK); // set the default language
        }
    }
};

If the status is success, TTS was initialized successfully and we can use it; otherwise we cannot. The setLanguage method sets the language in which we want the reply. Hence the engine can be initialized as:

TextToSpeech tts = new TextToSpeech(getApplicationContext(), listener);

One thing you have to remember when using TTS is that it runs on the main thread, so it may sometimes cause delays in text to speech conversion or block the UI for a while. It is better to wrap the initialization as in the code below.

new Handler().post(new Runnable() {
    @Override
    public void run() {
        tts = new TextToSpeech(getApplicationContext(), listener);
    }
});

Now our engine is ready to speak; we simply need to pass the string we want it to read:

tts.speak(textToRead, TextToSpeech.QUEUE_FLUSH, null, null); // textToRead holds the string to be spoken

But before calling tts.speak, it is important to request audio focus, because only one audio source can have focus at a time. You can check for focus changes using the code below.

private AudioManager.OnAudioFocusChangeListener afChangeListener =
        new AudioManager.OnAudioFocusChangeListener() {
            public void onAudioFocusChange(int focusChange) {
                // check for focus
            }
        };

onAudioFocusChange is called when the audio focus of the system changes, and according to the value of focusChange we either stop TTS or keep using it.

AudioManager audiofocus = (AudioManager) getSystemService(Context.AUDIO_SERVICE);

audiofocus is an instance of the AudioManager class. We need it to call the requestAudioFocus method of the AudioManager class, which returns the status of the request for an audio focus change. This method requires three parameters: an instance of AudioManager.OnAudioFocusChangeListener, the stream type and a duration hint. Only if the request is granted can we use tts.speak.

int result = audiofocus.requestAudioFocus(afChangeListener, AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN);

if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
    tts.speak(textToRead, TextToSpeech.QUEUE_FLUSH, null, null);
}

We were continuously facing the 'delay in speech response' issue because the voiceReply method implementation was wrong. We were initializing TextToSpeech on each call of the voiceReply method, and since the onInit method runs on the main thread, this caused a delay in the voice response. So I removed that, and instead of initializing TTS each time, I used the instance already initialized when the activity is created.

String spoken = reply;

textToSpeech.speak(spoken, TextToSpeech.QUEUE_FLUSH, null, null);
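
A minimal sketch of the corrected flow (ChatActivity is an assumed name; the four-argument speak() needs API 21+) could look like this:

import android.os.Bundle;
import android.speech.tts.TextToSpeech;
import android.support.v7.app.AppCompatActivity;
import java.util.Locale;

public class ChatActivity extends AppCompatActivity {

    private TextToSpeech textToSpeech;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // initialize the engine once, instead of on every voiceReply call
        textToSpeech = new TextToSpeech(getApplicationContext(), new TextToSpeech.OnInitListener() {
            @Override
            public void onInit(int status) {
                if (status == TextToSpeech.SUCCESS) {
                    textToSpeech.setLanguage(Locale.UK);
                }
            }
        });
    }

    private void voiceReply(String reply) {
        // reuse the already initialized engine; no per-call setup cost
        textToSpeech.speak(reply, TextToSpeech.QUEUE_FLUSH, null, null);
    }

    @Override
    protected void onDestroy() {
        // release the engine when the activity goes away
        textToSpeech.shutdown();
        super.onDestroy();
    }
}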

You can also control how the engine reads text; for example, we can modify the pitch and the speech rate.

tts.setPitch((float)pitch);

tts.setSpeechRate((float)speed);


Integration of Susi AI to Gitter

This blog post discusses the development of the SUSI Messenger bot on Gitter. It replies instantly to messages sent to it, using the SUSI API. The Streaming API notifies us when a user messages the SUSI chat room, and the REST API helps to message back with a reply from the SUSI API to the SUSI chat room.

Feel free to message the existing SUSI AI account on Gitter and have a chat with it.

Prerequisites

  1. Basic knowledge about calling APIs and fetching data from or posting data to an API.
  2. Node.js language.
  3. GitHub
  4. Heroku

Figure – Architecture for running SUSI AI on different messaging services.

This blog post will walk you through each of the steps required to integrate SUSI AI to Gitter:

Setup SUSI AI Bot on Gitter

  1. Create a GitHub or Twitter account with a username containing 'Susi' as a substring, because this is the name that will be shown with the reply string we get from SUSI AI.
  2. Now sign in to Gitter with that Twitter or GitHub account from here.
  3. Create your community by visiting this page. After writing your community name press next, invite the people you want to be in this room and press next. You will be redirected to your community's lobby. This lobby is the chat room to which we will deploy our SUSI AI.
  4. Now visit the Gitter developer page and press sign in on the top right. You will be redirected to your apps page. Copy the personal access token written there, as shown in this image (the area colored black will have your access token).

  5. On a new tab in your browser, visit https://api.gitter.im/v1/rooms?access_token=YOUR_ACCESS_TOKEN, with YOUR_ACCESS_TOKEN replaced by the token we just copied.

A JSON object will be shown in your browser. You will see the value of the 'name' key as YOUR_COMMUNITY_NAME/Lobby. Copy the id of this chat room, as we will need it later. You can refer to the image below; your chat room id will be in the area colored black.

  6. Create a new Heroku app here.

This app will accept the requests from Gitter and the SUSI API.

  7. Set the config variables for this Heroku app in the settings tab of your account: set ROOM_ID to the id of the chat room and TOKEN to the personal access token we copied in steps 4 and 5.

These were the formalities to be done to have our chat bot account on Gitter.

  8. Let's jump to the code part of how this integration is done:

To use the two config variables set in Heroku, we need these two lines in our Node.js code:

var roomId = process.env.ROOM_ID;
var token = process.env.TOKEN;

We need to set up an options variable with our access token and room id in it:

// Setting the options variable to use it in the https request block
var options = {
    hostname: 'stream.gitter.im',
    port:     443,
    path:     '/v1/rooms/' + roomId + '/chatMessages',
    method:   'GET',
    headers:  {'Authorization': 'Bearer ' + token}
};

We will send this options variable when making a request so that Gitter lets our request through and notifies us when a client messages to our SUSI chat room.

res.on('data') accepts a callback which is called whenever a client messages our SUSI chat room:

// making a request to the Gitter stream API
var https = require('https');

var req = https.request(options, function(res) {
  res.on('data', function(chunk) {
    // do stuff
  });
});

req.on('error', function(e) {
 console.log('Hey something went wrong: ' + e.message);
});

req.end();

According to the docs of the REST API in Gitter, the JSON data that we receive when a client sends a message to a chat room carries the message text along with metadata such as the sender's details.

To get this response set in a variable, we can use this code snippet:

res.on('data', function(chunk) {
   var msg = chunk.toString();
   if (msg != " \n") {             // If message is not an empty message
     var jsonMsg = JSON.parse(msg);
     // ...
   }
});

Now we have this JSON response in the jsonMsg variable. To extract the client's message from this JSON object:

var clientMsg = jsonMsg.text;

Now that we have the user query in the clientMsg variable, let's call the SUSI API and fetch an answer from it.

As an example, let’s first take the query as ‘hi’ and visit http://api.asksusi.com/susi/chat.json?q=hi from the browser. We will get a JSON object as follows:

{
        "query": "hi",
        "count": 1,
        "client_id": "aG9zdF8xMDMuMjkuMjIyLjE4MA==",
        "query_date": "2017-07-17T02:29:44.171Z",
        "answers": [{
            "data": [{
                "0": "hi",
                "token_original": "hi",
                "token_canonical": "hi",
                "token_categorized": "hi",
                "timezoneOffset": "-330",
                "language": "en"
            }],
            "metadata": {"count": 1},
            "actions": [{
                    "type": "answer",
                    "language": "de",
                    "expression": "Hallo!"
            }],
   "skills": ["/susi_skill_data/models/general/smalltalk/de/German-Standalone-aiml2susi.txt"]
        }],
        "answer_date": "2017-07-17T02:29:44.179Z",
        "answer_time": 8,
        "language": "en",
        "session": {"identity": {
            "type": "host",
            "name": "103.29.222.180",
            "anonymous": true
        }}
    }

The answer can be found as the value of the key named expression. In this case, it is “Hallo!”.

To fetch the answer for our client message through code, we can use this snippet in Node.js:

// including request module
var request = require('request');

// setting options to make a successful call to Susi API.
var susiOptions = {
            method: 'GET',
            url: 'http://api.asksusi.com/susi/chat.json',
            qs:
            {
                timezoneOffset: '-330',
                q: clientMsg  //the client message sent to SUSI chat room.
            }
        };

// A request to the SUSI API
request(susiOptions, function (error, response, body) {
    if (error)
        throw new Error(error);
    // answer fetched from SUSI
    ans = (JSON.parse(body)).answers[0].actions[0].expression;
});

The properties required for the call are set up through a JSON object (i.e. susiOptions). We pass the susiOptions object to our request function as its first parameter, and the response from the API is stored in the body variable. We need to parse this body to access the properties of the object, and hence fetch the answer from the SUSI API.

As we now have the answer, let's call the Gitter API to send it back to the user. Let's code the request for that:

// To send a reply by Susi AI to client's message back to Gitter
         var gitterOptions = {
                               method: 'POST',
                               url: 'https://api.gitter.im/v1/rooms/'+roomId+'/chatMessages',
                               headers:
                               {
                                 'authorization': 'Bearer '+ token ,
                                 'content-type': 'application/json',
                                 'accept': 'application/json'
                               },
                               body:
                               {
                                 text: ans
                               },
                               json: true
                             };

         // making the request to Gitter API
         request(gitterOptions, function (error, response, body) {
           if(error)
             throw new Error(error);
           console.log(body);
         });

Hence, we have made the basic chat work!

The streaming API of Gitter notifies us of every message sent to our chat room, including the replies sent by ourselves. To avoid falling into an infinite loop of answers and questions by SUSI itself, we must include this check in our code:

res.on('data', function(chunk) {
   var msg = chunk.toString();
   if (msg != " \n") {              // If message is not an empty message
     var jsonMsg = JSON.parse(msg);
     if (jsonMsg.fromUser.displayName != 'SusiAI') { // If it's not our own answer
        // do stuff
     }
   }
});

req.on('error', function(e) {
 console.log('Hey something went wrong: ' + e.message);
});

req.end();

The display name in my case is 'SusiAI', but it may be different in your case, according to the GitHub or Twitter id you created.

  9. Upload this code to GitHub.
  10. Connect the Heroku app to the GitHub repository which has your code.

  11. Deploy on the development branch. If you intend to contribute, it is recommended to enable Automatic Deploys.

Branch Deployment.

Successful Deployment.

  12. Go to the Gitter room you created and enjoy chatting with Susi.

Uploading images to Dropbox from the Phimpme App

The Phimpme Android application, along with its camera, gallery and image editing features, offers the option to upload images to many social media and cloud storage services without having to install several other applications. As we can see from the screenshot below, Phimpme contains a user-friendly accounts screen to connect to these accounts using a username and password, so that we can upload photos from the share screen to a particular account later on.

One such popular cloud storage service is Dropbox, and in this tutorial I explain how I implemented the account authentication and image uploading feature for Dropbox in the Phimpme Android application.

Step 1

The first thing we need to do is create an application in the Dropbox application console to get the app key and the API secret key, which we will require later for authentication purposes. To create an application on the Dropbox application console page:

  1. Go to this link. It will open a page as depicted in the screenshot below:
  2. Now click on the Dropbox API option.
  3. Click on App folder – access to a single folder created specifically for your app.
  4. Write the name of your application and press the create app button.

After this, we will be redirected to the page which will contain all the keys required to authenticate and upload photos.

Step 2

After getting the keys, the next thing we need to do is install the Dropbox SDK. To do this:

  1. Download the Android SDK from this link and extract it.
  2. Copy the dropbox-android-sdk-1.6.3.jar and json_simple-1.1.jar files to the libs folder.
  3. Right-click on the added jar files and click on the add as library option.
  4. Copy the below-mentioned code into the AndroidManifest.xml file, which defines the Dropbox login activity within a new activity tag.
android:name="com.dropbox.client2.android.AuthActivity"
android:launchMode="singleTask"
android:theme="@android:style/Theme.Translucent.NoTitleBar"
android:configChanges="orientation|keyboard">
<intent-filter>
     <!-- Change this to be db- followed by your app key -->
         <data android:scheme="db-app_key" />
         <action android:name="android.intent.action.VIEW" />
         <category android:name="android.intent.category.BROWSABLE"/>
         <category android:name="android.intent.category.DEFAULT" />
</intent-filter>

In the data element of the code snippet, replace app_key with the key you received in step 1.

Step 3

After setting up everything, we need to obtain an access token for the user in order to upload photos to that particular account. To do this, we can make use of the code snippet below, which uses the Dropbox SDK we installed in step 2 to create an object named mDBApi and initialise it to authenticate the user.

private DropboxAPI<AndroidAuthSession> mDBApi;

AppKeyPair appKeys = new AppKeyPair(APP_KEY, APP_SECRET);
AndroidAuthSession session = new AndroidAuthSession(appKeys);
mDBApi = new DropboxAPI<AndroidAuthSession>(session);

After initialisation in the onCreate method of the activity, we can authenticate the user using the following line of code.

mDBApi.getSession().startOAuth2Authentication(MyActivity.this);

This will open a window where the user is prompted to log in to their Dropbox account. After the login is finished, we are taken back to the activity that made the authentication call, so in the onResume method we need to get the user's access token, which will be used later to upload the images, using the code snippet below:

mDBApi.getSession().finishAuthentication();
String accessToken = mDBApi.getSession().getOAuth2AccessToken();
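
How the token is persisted is up to the app; a simple sketch using SharedPreferences (the preference and key names here are assumptions) could be:

SharedPreferences prefs = getSharedPreferences("dropbox_prefs", Context.MODE_PRIVATE);
prefs.edit().putString("dropbox_access_token", accessToken).apply();

// later, restore the session without asking the user to log in again
String savedToken = prefs.getString("dropbox_access_token", null);
if (savedToken != null) {
    mDBApi.getSession().setOAuth2AccessToken(savedToken);
}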

After we have stored the access token, we can upload a selected image to Dropbox using the following lines of code.

File file = new File("working-draft.jpg");
FileInputStream inputStream = new FileInputStream(file);
Entry response = mDBApi.putFile("/magnum-opus.jpg", inputStream,
                              file.length(), null, null);

For more information on uploading and retrieving data from a Dropbox account, you can read the Dropbox developer guide, and for a working example refer to the Phimpme Android repository in the resources below.

Resources

  1. Phimpme Repo: Phimpme Android GitHub repository.
  2. Dropbox official documentation: https://www.dropbox.com/developers-v1/core/start/android
  3. Dropbox application console: https://www.dropbox.com/developers/apps
  4. StackOverflow example to upload an image to Dropbox: https://stackoverflow.com/questions/10827371/upload-photo-to-dropbox-by-android

Face detection in Phimpme Android’s Camera

The Phimpme Android application comes with a well-featured camera which offers almost all the functionality an advanced camera user looks for. It comes with a wide range of scene modes and can also detect faces using the front or the back camera of the device. In this tutorial, I will be discussing how we achieved the face detection functionality in Phimpme.

In the Phimpme application, we have an option in the settings to enable face detection, as depicted in the screenshot below. After enabling it, the camera starts detecting faces and draws rectangular boxes around all the faces it detects.

I will explain how to achieve this step by step, using some code snippets.

Step 1

First, we have to check whether the device supports face detection, to avoid unnecessary application crashes, using Android's Camera.Parameters class.
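
A minimal sketch of such a check with Camera.Parameters could be:

Camera.Parameters params = camera.getParameters();
if (params.getMaxNumDetectedFaces() > 0) {
    // the camera supports face detection, safe to enable the feature
} else {
    // hide or disable the face detection setting to avoid crashes
}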

After the check, we have to create a new class named MyFaceDetectionListener which implements the camera controller's FaceDetectionListener interface. This class overrides the onFaceDetection function, which receives the array of detected faces as its parameter.

class MyFaceDetectionListener implements CameraController.FaceDetectionListener {
    @Override
    public void onFaceDetection(CameraController.Face[] faces) {
        faces_detected = new CameraController.Face[faces.length];
        System.arraycopy(faces, 0, faces_detected, 0, faces.length);
    }
}

Step 2

After creating this class, we need to open the camera so that we can set the face detection listener on it. This can be done with the code snippet provided below:

camera = Camera.open(cameraId);

We can open the front or the back camera by simply changing the cameraId: to open the front camera we set the cameraId value to 1, and for the back camera we set it to 0.
After this, we can set the face detection listener on the camera using the code snippet below.

mCamera.setFaceDetectionListener(fDetectionListener);
mCamera.startFaceDetection();

The setFaceDetectionListener function takes an object of the class we created in step 1 as its parameter, and startFaceDetection calls Android's predefined function to start detecting faces. The listener object can be created and initialised with the help of the code snippet below.

MyFaceDetectionListener fDetectionListener = new MyFaceDetectionListener();

After we have set the detection listener on the camera, it calls the overridden onFaceDetection function as soon as it detects a face. But how does the user know that a face has been detected? For this, we draw a rectangular box approximately the size of each detected face, using the following code snippet.

int l = faces[i].rect.left;
int r = faces[i].rect.right;
int t = faces[i].rect.top;
int b = faces[i].rect.bottom;
Rect uRect = new Rect(l, t, r, b);
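
Drawing these rectangles is typically done in a custom overlay view; a hedged sketch (FaceOverlayView is an assumed name, and the raw driver coordinates may need mapping to view coordinates before drawing) could be:

class FaceOverlayView extends View {

    private final Paint paint = new Paint();

    public FaceOverlayView(Context context) {
        super(context);
        paint.setColor(Color.WHITE);
        paint.setStyle(Paint.Style.STROKE);
        paint.setStrokeWidth(3f);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        if (faces_detected == null) return;
        for (CameraController.Face face : faces_detected) {
            // draw one rectangle per detected face
            canvas.drawRect(face.rect, paint);
        }
    }
}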

To get the full source code, please check out the Phimpme Android GitHub repository.

Resources

  1. Phimpme Android Github repository
  2. Complete tutorial on face detection in Android
  3. Leafpic github repository
  4. Android Camera API Google developer page

Multiple Color Effects in Phimpme Camera

The Phimpme Android camera comes with an option to switch between various color effects, along with its other functionality. To select different color modes, we added a toggle button at the top right corner of the camera interface which cycles through the range of available color effects; long clicking the toggle button resets the effect to normal. To show the functionality of the toggle button, we use a showcase view that displays all the toggle button's functions on the first run of the application.

In this tutorial, I will be discussing how we implemented the color effects feature in the Phimpme Android application with the help of some code snippets.

Step 1

First we have to create a toggle button in the camera interface and set an OnClickListener on it to change the color effect on each button click. This can be done with the following code snippet.

toggle.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        // Actions here
    }
});

Similarly, we have to set a long click listener on the toggle button, which handles the reset of the color effects in the application.

Step 2

The next thing we need is to extract all the color modes supported by the device and create an ArrayList of them, so that we can fetch the respective values by just increasing an index on each toggle button click.
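
A minimal sketch of this extraction, assuming the standard android.hardware.Camera.Parameters API rather than Phimpme's camera wrapper, could be:

Camera.Parameters params = mCamera.getParameters();
List<String> colorEffect = new ArrayList<>();
List<String> supported = params.getSupportedColorEffects();
if (supported != null) {
    // keep the normal mode first so a reset lands on index 0
    colorEffect.add(Camera.Parameters.EFFECT_NONE);
    for (String effect : supported) {
        if (!effect.equals(Camera.Parameters.EFFECT_NONE)) {
            colorEffect.add(effect);
        }
    }
}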

Now we have all the supported color modes, along with the normal mode, stored in the values list. For instance:

  1. Mono
  2. Negative
  3. Solarize
  4. Sepia
  5. Neon

Hence, on button click, we have to get the color value using the list index and set that value on the camera parameters from which we extracted the supported color effects.

For this, we can make use of a static variable named colorNum initialised with 0; on each button click we increment it by 1 and set the color effect using the code snippet provided below:

final String color = colorEffect.get(colorNum);
CameraController.SupportedValues supported_values = camera_controller.setColorEffect(color);
if (supported_values != null) {
    color_effects = supported_values.values;
    applicationInterface.setColorEffectPref(supported_values.selected_value);
}

And in the long click listener method, we can set the variable back to 0 and apply the color value accordingly.
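
A hedged sketch of that long-click reset (reusing the colorNum and colorEffect names from above) could be:

toggle.setOnLongClickListener(new View.OnLongClickListener() {
    @Override
    public boolean onLongClick(View v) {
        // reset to the first (normal) color effect
        colorNum = 0;
        camera_controller.setColorEffect(colorEffect.get(colorNum));
        return true; // consume the event so onClick does not also fire
    }
});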

To get the full source code for changing the color effects in the camera, and to learn about the showcase view we used to present this functionality, please refer to the Phimpme Android repository.

Resources

  1. Open camera Github repository
  2. Color effects in Android camera
  3. Camera API developer page
  4. Amlcurran Showcaseview

Using Variables in a SUSI skill

One of the best features provided in making a skill is the ease of using variables. From storing the favourite book of the user, to the most recent movie they searched for, to the mood they are in, variables play an indispensable part. If you face any problem with the code part, the skill referred to in this blog is coded in this file in the susi_skill_data repository.

This link refers to the official docs of SUSI, which walk you through some basic examples of how to use variables in a SUSI skill. Great skills can be achieved using them like the skill below:

It's easy to make such skills by using variables. Let's check out how this skill can be achieved.

To store a value in a variable, we use this syntax during skill development:

^value^>_variableName

First, let's save the favourite dish of the user, and then we will try to surprise them with a witty answer.

I love * dish
^$1$^>_userFavouriteDish

So, if the user types "I love biryani dish", $1$ will be equal to biryani, and we save it to the _userFavouriteDish variable.

Now if the user asks SUSI "What should I eat", I bet SUSI will give a well-calculated answer!

What should i eat?
I am sure you will love $_userFavouriteDish$!

Another example that can answer back the user efficiently:

How to cook biryani?

#Gives recipies and links to cook a dish
* cook *
!console:To cook  $title$ , check out $href$ and make sure you have $ingredients$! ^$2$^>_recentSearch
{
"url":"http://www.recipepuppy.com/api/?q=$2$",
"path":"$.results"
}
eol

In the above code, we saved the dish searched for at the end of the output.

If the user somehow ends up asking "what is the most recent dish I searched for", its skill will be:

what is the most recent dish I searched for?
It was $_recentSearch$

Even if, before asking this question, the user asks "how to cook sushi", the _recentSearch variable will be overridden with the value "sushi" instead of "biryani". Hence, SUSI will correctly answer the "most recent dish" question with "sushi"!

Now that we are a bit comfortable with the use of variables in a skill, let's get back to our target skill, i.e. the remembering skill. We store the thing we are asked to remember in a variable having the same name as that thing, and the statement related to it as the value of that variable. Examples:

Remember that my keys are on the table. So the variable will be named "keys" and its value will be "on the table".

Remember that my birthday is on 20th of December. So the variable will be named "birthday" and its value will be "on 20th of December".

Remember that my meetings are at 8 pm with mentors and at 9:30 pm with Shruti. So the variable will be named "meetings" and its value will be "at 8 pm with mentors and at 9:30 pm with Shruti".

Hence the skill:

Remember that my * is * | Remember that my * are *
Okay, remembered!^$2$^>_$1$

When the user asks for any of their things, we just show the value of the variable having the same name as the thing asked about. Examples:

#$_keys$ will be our answer
Where are my keys?
On the table                   

#$_meetings$ will be our answer
When are my meetings?
at 8 pm with mentors and at 9:30 pm with Shruti

Hence the skill which answers the question is:

when are my * | where is my * | where are my *
$_$1$$

So the skill as a whole will be:

Remember that my * is * | Remember that my * are *
Okay, remembered!^$2$^>_$1$

when are my * | where is my * | where are my *
$_$1$$


Generating responsive email using mjml in Yaydoc

In Yaydoc, an email with download, preview and deploy links is sent to the user after documentation is generated. Initially, Yaydoc was sending this email in plain text without any styling, so I decided to make an attractive HTML email template for it. The problem with HTML email is adding custom CSS and making it responsive, because the email will be seen on various devices like mobiles, tablets and desktops. When going through the GitHub trending list, I came across mjml and was totally stunned by its capabilities. Mjml is a responsive email generation framework built using React (the popular front-end framework maintained by Facebook).

Install mjml to your system using npm.

npm init -y && npm install mjml

Then add mjml to your path

export PATH="$PATH:./node_modules/.bin"

Mjml has a lot of pre-built React components for creating responsive email, for example mj-text, mj-image, mj-section etc.

Here I’m sharing the snippet used for generating email in Yaydoc.

<mjml>
  <mj-head>
    <mj-attributes>
      <mj-all padding="0" />
      <mj-class name="preheader" color="#CB202D" font-size="11px" font-family="Ubuntu, Helvetica, Arial, sans-serif" padding="0" />
    </mj-attributes>
    <mj-style inline="inline">
      a { text-decoration: none; color: inherit; }
 
    </mj-style>
  </mj-head>
  <mj-body>
    <mj-container background-color="#ffffff">
 
      <mj-section background-color="#CB202D" padding="10px 0">
        <mj-column>
          <mj-text align="center" color="#ffffff" font-size="20px" font-family="Lato, Helvetica, Arial, sans-serif" padding="18px 0px">Hey! Your documentation generated successfully<i class="fa fa-address-book-o" aria-hidden="true"></i>
 
          </mj-text>
        </mj-column>
      </mj-section>
      <mj-section background-color="#ffffff" padding="20px 0">
        <mj-column>
          <mj-image src="http://res.cloudinary.com/template-gdg/image/upload/v1498552339/play_cuqe89.png" width="85px" padding="0 25px">
</mj-image>
 
          <mj-text align="center" color="#EC652D" font-size="20px" font-family="Lato, Helvetica, Arial, sans-serif" vertical-align="top" padding="20px 25px">
            <strong><a>Preview it</a></strong>
            <br />
          </mj-text>
        </mj-column>
        <mj-column>
          <mj-image src="http://res.cloudinary.com/template-gdg/image/upload/v1498552331/download_ktlqee.png" width="100px" padding="0 25px" >
        </mj-image>
          <mj-text align="center" color="#EC652D" font-size="20px" font-family="Lato, Helvetica, Arial, sans-serif" vertical-align="top" padding="20px 25px">
            <strong><a>Download it</a></strong>
            <br />
          </mj-text>
        </mj-column>
        <mj-column>
          <mj-image src="http://res.cloudinary.com/template-gdg/image/upload/v1498552325/deploy_yy3oqw.png" width="100px" padding="0px 25px" >
        </mj-image>
          <mj-text align="center" color="#EC652D" font-size="20px" font-family="Lato, Helvetica, Arial, sans-serif" vertical-align="top" padding="20px 25px">
 
            <strong><a>Deploy it</a></strong>
            <br />
          </mj-text>
        </mj-column>
      </mj-section>
      <mj-section background-color="#333333" padding="10px">
        <mj-column>
        <mj-text align="center" color="#ffffff" font-size="20px" font-family="Lato, Helvetica, Arial, sans-serif" padding="18px 0px">Thanks for using Yaydoc<i class="fa fa-address-book-o" aria-hidden="true"></i>
        </mj-column>
        </mj-text>
      </mj-section>
    </mj-container>
  </mj-body>
</mjml>

The main goal of this example is to make a responsive email which looks like the image given below. In the mj-head tag, I imported all the necessary fonts using the mj-class tag and wrote my custom CSS in mj-style. Then I made a container with one row and one column using the mj-container, mj-section and mj-column tags, changed the container background color to #CB202D using the background-color attribute, and wrote a heading in that container which says `Hey! Your documentation generated successfully` with the mj-text tag. This produces the red top bar with the success message. Moving on to the second part, I made a container with three columns, added one image to each column with the mj-image tag by specifying the image URL as the src attribute, and added the corresponding text below each mj-image tag using the mj-text tag. At last, I made one more container like the first one, with a different message saying `Thanks for using Yaydoc` and background color #333333.

At last, transpile your mjml code to HTML by executing the following command.

mjml -r index.mjml -o index.html

Rendered Email

The Mission Mars Challenge with NodeJS and Open Source Bot Framework Emulator

Commissioned under the top-secret space project, our first human team set foot months ago. This mission on the red planet began with the quest to establish civilization by creating our first outpost on an extraterrestrial body. Not so long ago, mission control lost contact with the crew, and we are gathering the best of mankind to help save this mission.

In this rescue mission, you will learn to create a bot using an open source framework and tools. You will be given access to our code repositories and other technical resources. There are 3 missions and 2 code challenges to solve in order to bring the Mars mission back on track.

We need you! Be the first to crack the problems and rescue the compromised mission! Your bounty awaits! Receive your mission briefing at the control centre after checking in at the FOSSASIA Summit!

How to enter:

  1. Join us on March 18 at the FOSSASIA Summit (Singapore Science Centre), Tinker Lab (Hall E), at the following timeslots:
    • 9:30
    • 11:30
    • 13:30
  2. Bring your own PC or loan one from the mission control. We provide internet access in the lab room.
  3. Fill up the registration form and check in with the form at the Mission Control.
  4. A mission briefing will be provided; you will be given access to the GitHub repository where your mission resources are provided, and you can proceed to crack the challenges.
  5. Badges of honor to be earned, and a bounty awaits the team with the best time!
  6. Winners to be announced at 17:30! Be there!

Installations needed:

  1. NodeJS (https://nodejs.org/en/)
  2. Any Code Editor (Visual Studio Code/Atom/Sublime Text etc.)
  3. Open Source Bot Framework Emulator (https://emulator.botframework.com/)

FOSSASIA Summit 2017 Singapore – Call for Speakers

The FOSSASIA OpenTechSummit is Asia’s leading Open Technology conference for developers, startups, and IT professionals. In 2017 the event will take place from March 17th – 19th at the Science Centre Singapore.

Over three days, thousands of developers, technologists, scientists, entrepreneurs and artists get together to showcase the latest technologies, communicate, exchange ideas, learn from each other, and collaborate. Topics range from information technology and Open Source software development to hardware and maker projects, open design tools, machine learning, DevOps, knowledge tools, and citizen science.

For our 2017 feature event we are looking for speaker submissions for the following tracks:

* Open Source Software
* Design, Art & Culture
* Internet, Society & Politics
* Hardware & Making
* Health and Technology
* Science
* Kernel Track
* Startup and Business Development

Apart from the conference program, the FOSSASIA Summit offers an exhibition space for company and project stands and areas for community assemblies, and developer meetings.

Submission Guidelines

Please propose your session as early as possible and include a description that is as complete as possible; the description is of particular importance for the selection. Once accepted, speakers will receive a code for a speaker ticket. Please indicate on the submission form if you would like to apply for a sponsored community ticket.

Submission Link: 2017.fossasia.org/speaker-registration

Dates & deadlines

Please send us your proposal as soon as possible via the FOSSASIA Summit speaker registration.

December 20th, 2016: Deadline for submissions
January 18th, 2017: Notification of acceptance
March 17th – 19th, 2017: FOSSASIA OpenTechSummit

Sessions and Tracks

Talks and Workshops
Talk slots are 20 minutes long plus 5-10 minutes for questions and answers. You can also sign up for either a 1-hour or a 2-hour workshop. Longer sessions are possible in principle. Please tell us the proposed length of your session at the time of submission.

Lightning talks
You have some interesting ideas but do not want to submit a full talk? We suggest you go for a lightning talk, which is a 5-minute slot to present your idea or project. You are welcome to continue the discussion in the break-out areas. There are tables and chairs to serve your get-togethers.

Stands and assemblies
We offer spaces in our exhibition area for companies, community projects, installations, workshops, team gatherings and other fun activities. We are curious to know what you would like to make, bring or show. Please add details in the submission form.

Developer Rooms/Track Hosts
Get in touch early if you plan to organize a developer room at the event. FOSSASIA is also looking for team members who are interested to co-host and moderate tracks. Please sign up to become a host here.

Publication

Audio and video recordings of the lectures will be published in various formats under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. This license allows commercial use by media institutions as part of their reporting. If you do not wish for material from your lecture to be published or streamed, please let us know in your submission.

Sponsorship & Contact

If you would like to sponsor FOSSASIA or have any questions, please contact us via [email protected]

Links

FOSSASIA Summit 2017: 2017.fossasia.org

FOSSASIA Summit 2016 Event: Wrap-Up

FOSSASIA Photos: flickr.com/photos/fossasia/

FOSSASIA Videos: Youtube FOSSASIA

FOSSASIA on Twitter: twitter.com/fossasia

FOSSASIA Coding Contest: codeheat.org
