Working with Activity API in Open Event API Server


Recently, I added the Activities API, along with documentation and dredd tests, to the Open Event API Server. The Activity model in the Open Event Server is essentially a log of everything that happens while the server is running: event updates, speaker additions, invoice generations, and other similar things. This blog post explains how the Activity API was implemented in the Open Event API Server's nextgen branch. In the Open Event Server, we first add the endpoints and then document them, so that the consumers (Open Event Orga App, Open Event Frontend) find it easy to work with all the endpoints.

We also test the documentation against the backend implementation to ensure that an end developer working with the APIs is not misled about what each endpoint actually does in the server.

The Activities API endpoints are based on the Activity database model. The Activity table has four columns, id, actor, time, and action, whose names are self-explanatory. For the API schema, we need to create fields corresponding to these columns.
Since id is auto-generated, we do not need to add it as an API field. Also, the Activity model's __init__ method stamps time with the current system time, so this field is not required in the API fields either. We are left with two fields: actor and action.
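For reference, here is a minimal sketch of what such a model could look like. The column types follow the description above, while the import path and table name are illustrative rather than the server's actual code.

import datetime

from app.models import db  # illustrative import path

class Activity(db.Model):
    # A log of everything that happens while the server runs
    __tablename__ = 'activities'

    id = db.Column(db.Integer, primary_key=True)
    actor = db.Column(db.String)
    time = db.Column(db.DateTime)
    action = db.Column(db.String)

    def __init__(self, actor=None, action=None):
        self.actor = actor
        self.action = action
        # __init__ stamps time with the current system time
        self.time = datetime.datetime.now()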

Defining API Schema

Next, we define the API schema class for the Activities model. This involves a Meta class and the fields of the schema. The Meta class contains the metadata of the class: details about type_, self_view, self_view_kwargs, and an inflect parameter to dasherize the input fields from the request body.

We define the four fields, id, actor, time, and action, as marshmallow fields, based on the data types and parameters of the corresponding columns in the Activity model. Since id, actor, and action are String columns and time is a DateTime column, the fields are written as follows:
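Here is a sketch of how they could look with marshmallow-jsonapi; the view names and the dasherize helper are illustrative assumptions, not the server's exact code.

from marshmallow_jsonapi import fields
from marshmallow_jsonapi.flask import Schema

def dasherize(text):
    # e.g. 'starts_at' -> 'starts-at' in request/response bodies
    return text.replace('_', '-')

class ActivitySchema(Schema):
    class Meta:
        type_ = 'activity'
        self_view = 'v1.activity_detail'  # assumed view name
        self_view_kwargs = {'id': '<id>'}
        inflect = dasherize

    id = fields.Str(dump_only=True)
    actor = fields.Str(allow_none=True)
    time = fields.DateTime(allow_none=True)
    action = fields.Str(allow_none=True)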

The id field is marked as dump_only because it is a read-only field. The other fields are marked with allow_none since none of them is required.

ActivityList Class:
The ActivityList class provides us with the endpoint "/v1/activities".

This endpoint lists all the activities. Since we want only GET requests to work here, methods = ['GET', ] is defined for the ResourceList. The activities are created internally on different actions, such as creating an event, updating an event, or adding speakers to sessions. Since the activities are to be shown only to the server admin, the is_admin permission from the permission manager is used.
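A sketch of the resource class under those constraints follows; the import paths and the exact way the permission check is applied are assumptions based on typical flask-rest-jsonapi usage.

from flask_rest_jsonapi import ResourceList

from app.api.helpers.permissions import is_admin  # assumed path to the permission manager
from app.api.schema.activities import ActivitySchema  # the schema defined above
from app.models import db
from app.models.activity import Activity

class ActivityList(ResourceList):
    # List all activities; GET only, visible only to the server admin
    methods = ['GET', ]
    decorators = (is_admin, )
    schema = ActivitySchema
    data_layer = {'session': db.session,
                  'model': Activity}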

ActivityDetail Class:

The ActivityDetail class provides methods to work with a single activity based on its id.
The endpoint provided is '/v1/activities/<int:activity_id>'.

Since this is also an admin-only, GET-only endpoint, the resource class is written similarly:
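A sketch under the same assumptions as the list resource above:

from flask_rest_jsonapi import ResourceDetail

class ActivityDetail(ResourceDetail):
    # Fetch a single activity by activity_id; GET only, admin only
    # (reuses the imports shown for ActivityList)
    methods = ['GET', ]
    decorators = (is_admin, )
    schema = ActivitySchema
    data_layer = {'session': db.session,
                  'model': Activity}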

Writing Documentation:

The documentation is written using API Blueprint. We have two endpoints to document, /v1/activities and /v1/activities/<int:activity_id>, both GET only.

We begin by defining the 'Group Activities', under which we first list the 'Activity Collection', which essentially is the ActivityList class.

For this class, we have the endpoint /v1/activities, added for the GET request. The parameters actor, time, and action are described along with their description, type, and whether they are required or not.

The request headers are written as part of the docs followed by the expected response.

Next, we write the 'Activity Details' section, which represents the ActivityDetail class. Here, the endpoint /v1/activities/<int:activity_id> is documented for GET. The parameter here is activity_id, the id of the activity whose details are to be fetched.

Writing DREDD Test for Documentation

To imitate the request-response cycle, we need a faker script which creates an object of the class whose docs we are testing and then makes the request. For this, we use FactoryBoy and dredd hooks to insert data into the database.

Defining Factory Model for Activity db model
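Below is a sketch of the factory, assuming FactoryBoy's SQLAlchemy integration; the import paths are illustrative.

import factory

from app.models import db
from app.models.activity import Activity

class ActivityFactory(factory.alchemy.SQLAlchemyModelFactory):
    class Meta:
        model = Activity
        sqlalchemy_session = db.session

    actor = 'test-actor'
    action = 'test-action'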

This factory is derived from factory.alchemy.SQLAlchemyModelFactory. Its Meta class defines the db model and the SQLAlchemy session to be used. The actor and action fields hold dummy strings which become part of the request body.

Writing Hooks
Now, to test these endpoints, we need to add objects to the database so that the GET requests have an object to fetch. This is done with dredd hooks. Before each request, an object of the corresponding factory class is initialised and committed into the database, so a dummy object is available for dredd to test on. The request is then made, and the real output is compared with the expected output written in the API Blueprint documentation.

This is what the hooks look like for the endpoint GET /activities.
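Here is a sketch using the dredd_hooks Python package; the transaction name is an assumption and must match the group, resource, and action headings in the API Blueprint file.

import dredd_hooks as hooks

from app.factories.activity import ActivityFactory  # illustrative path
from app.models import db

@hooks.before("Activities > Activity Collection > List All Activities")
def activity_get_list(transaction):
    # Commit a dummy activity so that GET /activities has data to return
    # (an application context is assumed to be active inside the hook file)
    activity = ActivityFactory()
    db.session.add(activity)
    db.session.commit()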

Now, if the expected and actual responses match, the dredd test passes. This dredd test is run on each build of the project on Travis to ensure that the documented code does exactly what it says!

This concludes the process of writing an API, right from the schema to the resources, documentation, and dredd tests.


Zooming Feature in the Phimpme Android’s Camera

The Phimpme Android application comes as a complete package of camera, image editing, sharing, and gallery functionalities. It has a fully featured, functional camera with all the capabilities a user expects from a camera application. One such feature in the Phimpme Android application is zooming: the user can zoom in with a pinch gesture, or enable zooming via the volume buttons in the settings. In this tutorial, I will explain how I achieved the zooming functionality in the Phimpme Android app.

Step 1

The first thing we need to do is check whether the device supports the zoom functionality at all, to avoid runtime crashes when the zoom action is performed on a device whose camera does not support it. This can be done with the following lines of code:

Camera.Parameters params = mCamera.getParameters();
boolean zoomSupported = params.isZoomSupported();

Step 2

Now, after getting the camera parameters and checking that the camera supports zooming, we need to add a touch listener to the surface view of the camera so that we can get the user's touch locations and finger spacing for the pinch-to-zoom functionality. This can be done using the following line of code.

surfaceView.setOnTouchListener(this);

Whenever the user touches the screen, this touch listener gives a callback to the overridden onTouchEvent method and passes the MotionEvent to it. The MotionEvent object in Android holds the movement reports. Now, in the onTouchEvent method, we calculate the spacing between the two fingers, and from it the approximate amount by which the user wants to zoom. The finger spacing can be calculated using the following lines of code.

private float getFingerSpacing(MotionEvent event) {
    // Distance between the first two pointers (fingers) on the screen
    float x = event.getX(0) - event.getX(1);
    float y = event.getY(0) - event.getY(1);
    return (float) Math.sqrt(x * x + y * y);
}

After getting the finger spacing, we need to cancel the camera's autofocus before performing the zoom action so that the application does not crash. This can be achieved by the single line of code below.

mCamera.cancelAutoFocus();

Step 3

The final step is to compute the zoom level from the finger spacing and set it on the camera. For this, we first need to get the maximum zoom level supported by the device, so that we never apply a zoom level the device cannot handle. The calculation of the maximum zoom level and the setting of the zoom level desired by the user can be performed using the following lines of code.

int maxZoom = params.getMaxZoom();
int zoom = params.getZoom();
float newDist = getFingerSpacing(event);
if (newDist > mDist) {
    // Fingers moved apart: zoom in
    if (zoom < maxZoom)
        zoom++;
} else if (newDist < mDist) {
    // Fingers moved closer: zoom out
    if (zoom > 0)
        zoom--;
}
mDist = newDist;
params.setZoom(zoom);
mCamera.setParameters(params); // apply the new zoom level

This is how we have achieved the functionality of zooming in and clicking pictures in the Phimpme Android application. To get the full source code and to know how to use the volume control buttons to zoom in/out, please refer to the Phimpme Android repository.

Resources

  1. GitHub – Open Camera source code: https://github.com/almalence/OpenCamera
  2. Android Developers guide – MotionEvent in Android: https://developer.android.com/reference/android/view/MotionEvent.html
  3. StackOverflow – Pinch-to-zoom functionality: https://stackoverflow.com/questions/8120753/android-camera-preview-zoom-using-double-finger-touch
  4. GitHub – Phimpme Android repository: https://github.com/fossasia/phimpme-android

Encoding and Decoding Images as Data in UserDefaults in SUSI iOS

In this blog post, I will explain how to encode and decode images and save them in UserDefaults, so that the image persists even if it is removed from the Photos app. It happens quite often that users remove images from the gallery, which results in the app losing the image. To avoid this, we save the image by encoding it into a Data object and storing it in UserDefaults. In the SUSI iOS app, we simply select an image from the image picker, encode it, and save it in UserDefaults. To set the image, we fetch the image data from UserDefaults and decode it back into an image.

There are two ways we can do the encoding and decoding process:

  • Using Data object
  • Using Base64 string

For the scope of this tutorial, we will use the Data object.

Implementation Steps

  1. To use the image picker, we need to add permissions to the `Info.plist` file.
<key>NSLocationWhenInUseUsageDescription</key>
<string>Susi is requesting to get your current location</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>Susi needs to request your gallery access to select wallpaper</string>
  2. Select image from gallery

First, we present an alert which gives an option to select the image from the gallery.

// Show wallpaper options to set wallpaper or clear wallpaper
func showWallpaperOptions() {
  let imageDialog = UIAlertController(title: ControllerConstants.wallpaperOptionsTitle, message: nil, preferredStyle: UIAlertControllerStyle.alert)
  imageDialog.addAction(UIAlertAction(title: ControllerConstants.wallpaperOptionsPickAction, style: .default, handler: { (_: UIAlertAction!) in
  imageDialog.dismiss(animated: true, completion: nil)
  self.showImagePicker()
  }))
  imageDialog.addAction(UIAlertAction(title: ControllerConstants.wallpaperOptionsNoWallpaperAction, style: .default, handler: { (_: UIAlertAction!) in
    imageDialog.dismiss(animated: true, completion: nil)
    self.removeWallpaperFromUserDefaults()
  }))
  imageDialog.addAction(UIAlertAction(title: ControllerConstants.dialogCancelAction, style: .cancel, handler: { (_: UIAlertAction!) in
    imageDialog.dismiss(animated: true, completion: nil)
  }))
  self.present(imageDialog, animated: true, completion: nil)
}

Here, we create a UIAlertController with three options: one presents the image picker controller, the second removes the background wallpaper, and the third dismisses the alert.

  3. Set the image as background view
// Callback when image is selected from gallery
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
  dismiss(animated: true, completion: nil)
  let chosenImage = info[UIImagePickerControllerOriginalImage] as? UIImage
  if let image = chosenImage {
    setBackgroundImage(image: image)
  }
}

We use the `didFinishPickingMediaWithInfo` delegate method to set the image as the background. First, we get the image from the `info` dictionary using the `UIImagePickerControllerOriginalImage` key.

  4. Save the image in UserDefaults (encoding)
// Save image selected by user to user defaults
func saveWallpaperInUserDefaults(image: UIImage!) {
  let imageData = UIImageJPEGRepresentation(image!, 1.0)
  let defaults = UserDefaults.standard
  defaults.set(imageData, forKey: userDefaultsWallpaperKey)
}

We first convert the image to a data object using the `UIImageJPEGRepresentation` method followed by saving the data object in UserDefaults with the key `wallpaper`.

  5. Decode the data object back to UIImage

Now, whenever we need the image back, we simply get the data object from UserDefaults and decode it into a UIImage (for example with UIImage(data:)) to display it.

// Return the image data saved in user defaults, or nil if none is saved
func getWallpaperFromUserDefaults() -> Any? {
  let defaults = UserDefaults.standard
  return defaults.object(forKey: userDefaultsWallpaperKey)
}

Below is the output when an image is selected and displayed as a background.


Using The Dark and Light Theme in SUSI iOS

SUSI, being an AI for interactive chat bots, provides answers to users in an intelligent way. To make the SUSI iOS app more user friendly, an option to switch between themes was introduced; it also lets the user pick the theme that suits the surrounding environment. Any user can switch between the light and dark themes easily from the settings.

We start by declaring an enum called `theme`, backed by String, with two cases, light and dark.

enum theme: String {
    case light
    case dark
}

We can easily update the color scheme based on the selected theme: we check which theme is currently active and update the colors accordingly. To track the currently active theme, we define a variable in the `AppDelegate` which holds its value.

var activeTheme: String?

Below is the way the color scheme of the LoginViewController is set.

func setupTheme() {
  let image = UIImage(named: ControllerConstants.susi)?.withRenderingMode(.alwaysTemplate)
  susiLogo.image = image
  susiLogo.tintColor = .white
  UIApplication.shared.statusBarStyle = .lightContent
  let activeTheme = AppDelegate().activeTheme
  if activeTheme == theme.light.rawValue {
    view.backgroundColor = UIColor.lightThemeBackground()
  } else if activeTheme == theme.dark.rawValue {
    view.backgroundColor = UIColor.darkThemeBackground()
  }
}

Here, we first get the image and set its rendering mode to `alwaysTemplate` so that we can change the tint color of the image. Next, we assign the image to the `IBOutlet` and change the tint color to white. We also change the status bar style to `lightContent`. Then we check the active theme and change the view's background color accordingly. For this method to execute, we call it inside `viewDidLoad` so that the theme loads as the view loads.

Next, let's add the option of switching between themes inside the `SettingsViewController`. We add a cell with its `titleLabel` set to `Change Theme` and use the collection view's `didSelect` delegate method to show an alert. This alert contains three options: Dark Theme, Light Theme, and Cancel. Let's code the method which presents the alert.

func themeToggleAlert() {
  let imageDialog = UIAlertController(title: ControllerConstants.toggleTheme, message: nil, preferredStyle: UIAlertControllerStyle.alert)
  imageDialog.addAction(UIAlertAction(title: theme.dark.rawValue.capitalized, style: .default, handler: { (_: UIAlertAction!) in
    imageDialog.dismiss(animated: true, completion: nil)
    AppDelegate().activeTheme = theme.dark.rawValue
    self.settingChanged(sender: self.imagePicker)
    self.setupTheme()
  }))
  imageDialog.addAction(UIAlertAction(title: theme.light.rawValue.capitalized, style: .default, handler: { (_: UIAlertAction!) in
    imageDialog.dismiss(animated: true, completion: nil)
    AppDelegate().activeTheme = theme.light.rawValue
    self.settingChanged(sender: self.imagePicker)
    self.setupTheme()
  }))
  imageDialog.addAction(UIAlertAction(title: ControllerConstants.dialogCancelAction, style: .cancel, handler: { (_: UIAlertAction!) in
    imageDialog.dismiss(animated: true, completion: nil)
  }))
  self.present(imageDialog, animated: true, completion: nil)
}

Here, we assign the alert view's title and add three actions with their respective completion handlers. Inside these completion handlers, we first dismiss the alert, then update the activeTheme variable in the AppDelegate and call the `settingChanged` function, which updates the user's settings on the server. Finally, we update the color scheme.

Now, if we build and run the app and change the theme from the settings, we notice that on returning to the chat view the color scheme is not updated. The reason is that we set up the theme in viewDidLoad, which runs only once and is not executed again until the controller is presented anew. So we make use of the `viewDidAppear` method, which executes every time the view appears.

override func viewDidAppear(_ animated: Bool) {
  super.viewDidAppear(animated)
  setupTheme()
}

To persist the selected theme, we use UserDefaults to save the theme; the saved value is assigned to the `activeTheme` variable every time the app loads.

UserDefaults.standard.set(AppDelegate().activeTheme, forKey: ControllerConstants.UserDefaultsKeys.theme)

On the first app launch, this UserDefaults key is given the light theme as its default value.

Below is the final output:


Implementing an Interface for Reading Configuration from a YAML File for Yaydoc

Yaydoc reads configuration specified in a YAML file to set various options during the build process, allowing users to customize various properties of the build. The previous implementation of this was very basic: it used PyYAML, a YAML parser for Python, to read the file and convert it to a Python dictionary. From the dictionary, we extracted values for various properties and converted them to strings using various heuristics, such as converting True to "true", False to "false", a list to a comma-separated string, and None to an empty string. Finally, we exported environment variables with those values.

Recently, this entire code was rewritten using the object-oriented paradigm. The motivation came from the fact that the implementation lacked certain features and also required some refactoring for long-term readability. The following paragraphs discuss the new implementation.

Firstly, a Configuration class was created, which basically wraps a dictionary and provides certain utility methods. The primary difference is that the Configuration class allows dotted key access, which means you can use the following syntax to access nested keys.

theme = conf['build.theme.name']

The class provides another method, connect, which is used to connect environment variables with configuration values. This method also takes a dotted key, but provides an extension on top of that to handle the case where a certain option can take multiple values. For example,

option: my_option

Or,

option:
  - my_option1
  - my_option2

To indicate that a certain config is of this type, you can append a "@" character to the key; anything after the "@" character is taken to be an attribute of each element within the list. Let's see an example of this whole process.

build:
  subproject:
    - url: <url1>
      source: "doc"
    - url: <url2>

Now to extract all urls from the above file, we’d need to do the following

config.connect('SUBPROJECT_URLS', 'build.subproject@url')

To extract sources, we also use the default parameter, as the source option is optional.

config.connect('SUBPROJECT_SOURCES', 'build.subproject@source', default='docs')

Finally, the Configuration object also provides a getenv method, which reads all connections and serializes the values to strings according to the previously described heuristics. It then returns a dictionary of all environment variables which must be set.
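Putting the pieces together, here is a condensed sketch of what such a class could look like; the real implementation in Yaydoc differs in its details.

import os
import yaml

class Configuration:
    # Wraps a dict parsed from YAML; supports dotted keys and env connections

    def __init__(self, conf):
        self._conf = conf
        self._connections = {}

    def __getitem__(self, dotted_key):
        # 'build.theme.name' walks conf['build']['theme']['name']
        value = self._conf
        for part in dotted_key.split('.'):
            value = value[part]
        return value

    def connect(self, envvar, dotted_key, default=None):
        self._connections[envvar] = (dotted_key, default)

    def _resolve(self, dotted_key, default):
        key, _, attr = dotted_key.partition('@')
        try:
            value = self[key]
        except (KeyError, TypeError):
            return default
        if attr:
            # '@attr' collects that attribute from each list element
            return [elem.get(attr, default) for elem in value]
        return value

    @staticmethod
    def _serialize(value):
        # The string-conversion heuristics described earlier
        if value is True:
            return 'true'
        if value is False:
            return 'false'
        if value is None:
            return ''
        if isinstance(value, list):
            return ','.join(Configuration._serialize(v) for v in value)
        return str(value)

    def getenv(self):
        return {env: self._serialize(self._resolve(key, default))
                for env, (key, default) in self._connections.items()}

if __name__ == '__main__':
    config = Configuration(yaml.safe_load(open('.yaydoc.yml')))
    config.connect('SUBPROJECT_URLS', 'build.subproject@url')
    config.connect('SUBPROJECT_SOURCES', 'build.subproject@source', default='docs')
    os.environ.update(config.getenv())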


Advanced functionality in SUSI FBbot

SUSI AI is integrated with Facebook (blog). During the initial phase, the SUSI FBbot had a basic UI and functionalities like plain-text replies. Facebook provides many more features: replies enclosed in templates (blog link), sharing SUSI A.I.'s replies with friends, a get started button, a persistent menu showing quick reply options to the user, and so on. All these features enhance the user experience with the SUSI AI chatbot.

This blog post walks you through adding these functionalities to the SUSI FBbot:

Adding Get Started button

A Get Started button is added to the SUSI FBbot to give the user a brief introduction about SUSI AI and what the user can try next.

Clicking on the get started button will send the message "Get Started" to the SUSI FBbot:

The reply message provides the user with options to visit the SUSI A.I. repository or to just start chatting.

To have this button in our bot, we use this code snippet:

// Add a get started button to the messenger
request({
    url: 'https://graph.facebook.com/v2.6/me/messenger_profile',
    qs: {access_token:token},
    method: 'POST',
    json: { 
      "get_started":{
        "payload":"GET_STARTED_PAYLOAD"
      }
    }
}, function(error, response, body) {
    // handle errors and response here
})

When a user clicks this button, a postback is sent to the webhook of the SUSI FBbot with the payload "GET_STARTED_PAYLOAD". When we receive such a postback, we reply with a message as shown above, using the generic template.

Adding a persistent menu to the bot

If not right at the start, then after chatting with SUSI AI for some time, it is quite possible that the user becomes curious to visit the SUSI A.I. repository. So we need quick access to the "Visit Repository" button at all times. A persistent menu gives us exactly this:

This way it is accessible at all times. Other buttons can also be added to the menu, such as "Latest News", as shown in the image above.

To have a persistent menu for the SUSI FBbot, the following code snippet is used:

request({
        url: 'https://graph.facebook.com/v2.6/me/messenger_profile',
        qs: {access_token:token},
        method: 'POST',
        json: {
                "persistent_menu":[{
                    "locale":"default", 
                        "composer_input_disabled":false,
                        "call_to_actions":[{
                            "type":"web_url",
                            "title":"Visit Repository",
                            "url":"https://github.com/fossasia/susi_server",
                            "webview_height_ratio":"full"
                        }]
                 }]
            }
    }, function(error, response, body) {
        // handle errors and response
    })

We can add more buttons to the menu: a JSON object with the required properties of a button can be appended to the "call_to_actions" array to do so.

Adding a messenger code to join SUSI FBbot

A messenger code enables Facebook users to chat with SUSI AI by scanning the code through Messenger. This feature is added to the bot by making the following POST request:

request({
        url: 'https://graph.facebook.com/v2.6/me/messenger_codes',
        qs: {access_token:token},
        method: 'POST',
        json: {
                type: "standard",
                image_size: 1000
        }
    }, function(error, response, body) {
        // handle errors and response.
});

Adding message sharing feature

To increase the reach of SUSI A.I. among Facebook users, message sharing proves to be a big boon. A reply by SUSI A.I. can be shared by users with their friends, and along with the message we can also send a promotional message (related to SUSI A.I.) to the people the message was shared with.

This sharing can lead to more users trying SUSI A.I., increasing the user base and popularity of SUSI AI.

We can allow sharing of just the message (i.e. the reply), or of a promotional message along with it. In the case of just the reply:

Clicking the share button will share just the reply with another person. To also share one more message along with the reply (prompting the recipient to try SUSI A.I.), some changes to the code are needed:

We need to set the buttons property in generic template like:

buttons : [
            {
                "type":"element_share",
                    "share_contents": { 
                      "attachment": {
                        "type": "template",
                        "payload": {
                          "template_type": "generic",
                          "elements": [
                            {
                              "title": "I had an amazing chat with SUSI.",
                              "buttons": [
                                {
                                  "type": "web_url",
                                  "url": "https://m.me/asksusisu", 
                                  "title": "Chat with SUSI AI"
                                }
                              ]
                            }
                          ]
                        }
                      }
                   }
            } 
       ];

This way, when a user shares the message with others, an extra message is sent with the original one, tempting the recipient to try a chat with SUSI A.I.:

Resources:

  1. By Seth Rosenberg on the Facebook Developers blog: Link Ads to Messenger, Enhanced Mobile Websites, Payments and More.
  2. By Slobodan Stojanović in Smashing Magazine: Develop a chat bot with Node.js.

How to Make Phimpme Android App Crash Free

The Phimpme Android app is now almost ready, with lots of social sharing options. A user can upload images to multiple platforms like Tumblr, Flickr, Imgur, OwnCloud (open source), Nextcloud, Dropbox, Pinterest, etc. Apart from sharing, the Phimpme app also allows the user to click images with its own custom camera, with different filters and various editing options. As everything is now almost ready, it is also important to make the app stable and crash free. To make the app stable and compatible with all types of devices, we can write instrumentation test cases. In this post, I will explain how I made the Phimpme Android app crash free by integrating crash reporting into Phimpme using the Firebase Crash Reporting service and Crashlytics.

Using Firebase Crash Reporting service:

Firebase is free of cost and provides various features along with crash reporting. Below is a step-by-step guide to integrating the Firebase crash service.

Step 1:

The first step is to register your app on the Firebase developer console. To register your Android app on Firebase, click here. Add your app name and select your country.

 

Step 2:

Next, click on the Add Firebase to your Android app button and fill in your Android application's package name and the SHA-1 key. You can generate this key very easily; type this command in your terminal to generate the SHA-1:

keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android

On successful completion of the command, the SHA-1 key will be displayed on the terminal.

Step 3:

Now add the SHA-1 and the package name in the Firebase console. After that, download the google-services.json file and place it in the app folder of your project.

Step 4:

Add the following dependency and plugin to your project's build.gradle files:

// Project-level build.gradle
buildscript {
  dependencies {
    classpath 'com.google.gms:google-services:3.1.0'
  }
}
// App-level build.gradle, at the bottom of the file
apply plugin: 'com.google.gms.google-services'


Step 5:

Once you have completed the above four steps, your app will be visible in the Firebase console and you can add the crash service. Crashes will then show up in your Firebase console.

After this, add the following dependency to your app-level build.gradle. This is very important.

compile 'com.google.firebase:firebase-crash:9.4.0'


Auto Deployment of SUSI Server using Kubernetes on Google Cloud Platform

Recently, we set up auto deployment of the SUSI Server on the Google Cloud Platform using Kubernetes and Docker images after each commit to the GitHub repo, with the help of Travis continuous integration. Basically, whenever a new commit is added to the repo, the Travis build builds a docker image of the server and then uses it to deploy the server on the Google Cloud Platform. We use Kubernetes for deployment since it makes it very easy to scale up the project when traffic on the server increases, and Docker because with it we can easily build images which can then be used to update the deployment. The schematic below shows the procedure more clearly.

Prerequisites

  1. You must be signed in to your Google Cloud Console and have enabled billing and must have credits left in your account.
  2. You must have a docker account and a repo in it. If you don’t have one, make it now.
  3. You should have enabled Travis on your repo and have a .travis.yml file in your repo.
  4. You must already have a project in the Google Cloud Console. Make a new one if you don't have one.

Pre Deployment Steps

You will need to do some work on the Google Cloud Platform before actually starting the auto deployment process:

  1. Creating a new Cluster.
  2. Adding and formatting a Persistent Disk
  3. Adding a Persistent Volume Claim (PVC)
  4. Labeling a node as primary.

Check out this documentation on how to do that. It may help.

Implementation

(Schematic source: https://cloud.google.com/solutions/continuous-delivery-with-travis-ci)

1. The first step is simply to add the lines below to the .travis.yml file and create an empty deploy.sh file.

after_success:
- bash kubernetes/travis/deploy.sh

Now we’ll be moving line by line and adding commands in the empty deploy.sh file that we created in the previous step.

2. The next step is to remove obsolete Google Cloud files and install the Google Cloud SDK and the kubectl command. Use the following lines to do that.

echo ">>> Removing obsolete gcloud files"
sudo rm -f /usr/bin/git-credential-gcloud.sh
sudo rm -f /usr/bin/bq
sudo rm -f /usr/bin/gsutil
sudo rm -f /usr/bin/gcloud

echo ">>> Installing new files"
curl https://sdk.cloud.google.com | bash;
source ~/.bashrc
gcloud components install kubectl

3. In this step, you need to download a JSON file which contains your Google Cloud credentials, copy that file to your repo, and encrypt it using Travis encryption keys. Follow this video, https://youtu.be/7U4jjRw_AJk, to see how to do that.

4. So, now that you have added your encrypted credentials.json file to your repo, you need to use those credentials to log in to your Google Cloud account. Use the lines below to do that.

echo ">>> Decrypting credentials and authenticating gcloud account"
# Decrypt the credentials we added to the repo using the key we added with the Travis command line tool
openssl aes-256-cbc -K $encrypted_YOUR_key -iv $encrypted_YOUR_iv -in ./kubernetes/travis/Credentials.json.enc -out Credentials.json -d
gcloud auth activate-service-account --key-file Credentials.json
export GOOGLE_APPLICATION_CREDENTIALS=$(pwd)/Credentials.json
# add gcloud project id
gcloud config set project YOUR_PROJECT_ID
gcloud container clusters get-credentials YOUR_CONTAINER

The above lines of code first decrypt your credentials, then log in to your account and set the project you created earlier.

5. Now that we are logged into Google Cloud, we need to build a docker image from a Dockerfile. Follow the official Docker docs to see how to write a Dockerfile. Here is an example of a Dockerfile. You will need to add "$DOCKER_USERNAME" and "$DOCKER_PASSWORD" as environment variables in the Travis settings of your repo.

echo ">>> Building Docker image"
cd kubernetes/images

docker build --no-cache -t YOUR_DOCKER_USERNAME/YOUR_DOCKER_REPO:$TRAVIS_COMMIT .
docker login -u="$DOCKER_USERNAME" -p="$DOCKER_PASSWORD"
docker tag YOUR_DOCKER_USERNAME/YOUR_DOCKER_REPO:$TRAVIS_COMMIT YOUR_DOCKER_USERNAME/YOUR_DOCKER_REPO:latest

6. Now, just push the docker image created in the previous step and update the deployment.

echo ">>> Pushing docker image"
docker push YOUR_DOCKER_USERNAME/YOUR_DOCKER_REPO

echo ">>> Updating deployment"
kubectl set image deployment/YOUR_CONTAINER_NAME --namespace=default YOUR_CONTAINER_NAME=YOUR_DOCKER_USERNAME/YOUR_DOCKER_REPO:$TRAVIS_COMMIT

Summary

This blog post described how we configured the Travis build and auto deployed the SUSI Server on the Google Cloud Platform using Kubernetes and Docker. You can do the same with your server too, and if you are looking to contribute to the SUSI Server, this may help you understand that part of the repo's code.

Resources

  1. The documentation for setting up your project on the Google Cloud Console before starting auto deployment https://github.com/fossasia/susi_server/blob/afb00cd9c421876f5d640ce87941e502aa52e004/docs/installation/installation_kubernetes_gcloud.md
  2. The documentation for encrypting your google cloud credentials and adding them to your repo https://cloud.google.com/solutions/continuous-delivery-with-travis-ci
  3. Docs for Docker to get you started with Docker https://docs.docker.com/
  4. Travis Documentation on how to secure your credentials https://docs.travis-ci.com/user/encryption-keys/
  5. Travis Documentation on how to add environment variables in your repo settings https://docs.travis-ci.com/user/environment-variables/

SPI Communication in PSLab

PSLab supports communication using the Serial Peripheral Interface (SPI) protocol. The Desktop app as well as the Android app have the framework set up to use this feature. The SPI protocol is mainly used by a few sensors which can be connected to the PSLab. To support SPI communication, the PSLab communication library has a dedicated class defined for SPI. A brief overview of how SPI communication works and of its advantages and limitations can be found here.

This dedicated SPI class has numerous methods defined in it. The methods required for a particular SPI sensor may differ slightly; however, in general, most sensors utilise a certain common set of methods. The commonly used methods are listed below with their functions.

The setParameters method sets the SPI parameters: Clock Polarity (CKP/CPOL), Clock Edge (CKE/CPHA), SPI mode (SMP), and other parameters such as the primary and secondary prescalers, which are specific to the device used.

Primary prescaler (0,1,2,3) for a 64 MHz clock -> (64:1, 16:1, 4:1, 1:1)

Secondary prescaler (0,1,...,7) -> (8:1, 7:1, ..., 1:1)

The values of CKP/CPOL and CKE/CPHA need to be set using the following convention, according to our requirements.

  • At CPOL=0 the base value of the clock is zero, i.e. the idle state is 0 and active state is 1.
    • For CPHA=0, data is captured on the clock’s rising edge (low→high transition) and data is changed at the falling edge (high→low transition).
    • For CPHA=1, data is captured on the clock’s falling edge (high→low transition) and data is changed at the rising edge (low→high transition).
  • At CPOL=1 the base value of the clock is one (inversion of CPOL=0), i.e. the idle state is 1 and active state is 0.
    • For CPHA=0, data is captured on the clock’s falling edge (high→low transition) and data is changed at the rising edge (low→high transition).
    • For CPHA=1, data is captured on the clock’s rising edge (low→high transition) and data is changed at the falling edge (high→low transition).

public void setParameters(int primaryPreScalar, int secondaryPreScalar, Integer CKE, Integer CKP, Integer SMP) throws IOException {
        if (CKE != null) this.CKE = CKE;
        if (CKP != null) this.CKP = CKP;
        if (SMP != null) this.SMP = SMP;

        packetHandler.sendByte(commandsProto.SPI_HEADER);
        packetHandler.sendByte(commandsProto.SET_SPI_PARAMETERS);
        packetHandler.sendByte(secondaryPreScalar | (primaryPreScalar << 3) | (this.CKE << 5) | (this.CKP << 6) | (this.SMP << 7));
        packetHandler.getAcknowledgement();
    }

 

The start method is responsible for sending the instruction to initiate SPI communication; it takes as input the channel which will be used for communication.

public void start(int channel) throws IOException {
        packetHandler.sendByte(commandsProto.SPI_HEADER);
        packetHandler.sendByte(commandsProto.START_SPI);
        packetHandler.sendByte(channel);
    }

 

The setCS method is responsible for selecting the slave with which SPI communication is to be done. This feature of SPI communication is known as Chip Select (CS) or Slave Select (SS). A master can use multiple chip/slave select pins for communication, whereas a slave utilises just one pin, as SPI is based on the single-master, multiple-slaves principle. The capacity of the PSLab is limited to two slave devices at a time.

public void setCS(String channel, int state) throws IOException {
        String[] chipSelect = new String[]{"CS1", "CS2"};
        channel = channel.toUpperCase();
        if (Arrays.asList(chipSelect).contains(channel)) {
            int csNum = Arrays.asList(chipSelect).indexOf(channel) + 9;
            packetHandler.sendByte(commandsProto.SPI_HEADER);
            if (state == 1)
                packetHandler.sendByte(commandsProto.STOP_SPI);
            else
                packetHandler.sendByte(commandsProto.START_SPI);
            packetHandler.sendByte(csNum);
        } else {
            Log.d(TAG, "Channel does not exist");
        }
    }

 

The stop method is responsible for sending the instruction to stop the communication with the slave.

public void stop(int channel) throws IOException {
        packetHandler.sendByte(commandsProto.SPI_HEADER);
        packetHandler.sendByte(commandsProto.STOP_SPI);
        packetHandler.sendByte(channel);
    }

 

The PSLab SPI class has methods defined for sending either 8-bit or 16-bit data over SPI, which are further classified by whether or not they request the acknowledgement byte (it helps to know whether the communication was successful).

The methods are named send8, send16, send8_burst, and send16_burst. The burst methods do not request any acknowledgement value and as a result work faster than the normal methods.

public int send16(int value) throws IOException {
        packetHandler.sendByte(commandsProto.SPI_HEADER);
        packetHandler.sendByte(commandsProto.SEND_SPI16);
        packetHandler.sendInt(value);
        int retValue = packetHandler.getInt();
        packetHandler.getAcknowledgement();
        return retValue;
    }

 


Implementing Search Bar Using GitHub API In Yaydoc CI

In Yaydoc, documentation is generated by typing the URL of a git repository into an input box; a user can generate documentation for any public repository, see a preview, and, if they have access, push the documentation to GitHub Pages in one click. In Yaydoc CI, however, a user can register a repository only if they have access to it. I first decided to show a list of repositories the user could select from, but GitHub does not return all of a user's repository data in one API hit, so I instead made a search bar in which the user can search for a repository and register it with Yaydoc CI.

var search = function () {
  var username = $("#orgs").val().split(":")[1];
  const searchBarInput = $("#search_bar");
  const searchResultDiv = $("#search_result");

  searchResultDiv.empty();
  if (searchBarInput.val() === "") {
    searchResultDiv.append('<p class="text-center">Please enter the repository name<p>');
    return;
  }

  searchResultDiv.append('<p class="text-center">Fetching data<p>');

  $.get(`https://api.github.com/search/repositories?q=user:${username}+fork:true+${searchBarInput.val()}`, function (result) {
    searchResultDiv.empty();
    if (result.total_count === 0) {
      searchResultDiv.append(`<p class="text-center">No results found<p>`);
      styles.disableButton("btnRegister");
    } else {
      styles.disableButton("btnRegister");
      var select = '<label class="control-label" for="repositories">Repositories:</label>';
      select += '<select class="form-control" id="repositories" name="repository" required>';
      select += `<option value="">Please select</option>`;
      result.items.forEach(function (x){
        select += `<option value="${x.full_name}">${x.full_name}</option>`;
      });
      select += '</select>';
      searchResultDiv.append(select);
    }
  });
};

$(function() {
  $("#search").click(function () {
    search();
  });
});

In the above snippet, I have defined the search function, which is executed when the user clicks the search button. It reads the search query from the input box; if the query is empty, it shows the message "Please enter the repository name", otherwise it hits the GitHub API to fetch the user's repositories. If GitHub returns an empty array, it shows "No results found". While the search is in progress, "Fetching data" is shown.

$('#search_bar').on('keyup keypress', function(e) {
    var keyCode = e.keyCode || e.which;
    if (keyCode === 13) {
      search();
    }
  });

  $('#ci_register').on('keyup keypress', function(e) {
    var keyCode = e.keyCode || e.which;
    if (keyCode === 13) {
      e.preventDefault();
      return false;
    }
  });

We still faced one problem: pressing the Enter key submitted the form automatically. So I registered event listeners which check whether the key code is 13 (key code 13 represents the Enter key). If it is, the search is triggered in the search bar, and in the registration form the default submission is prevented. You can see the working of the search bar in Yaydoc CI.
