Deploy to Azure Button for loklak

In this blog post, I am going to tell you about a new deployment method for loklak which is easy and quick, with just one click. Deploying to Azure Websites from a Git repository just got a little easier with the Deploy to Azure Button. Simply place the button in README.md with a link to the loklak repository, and users who click on it will be directed to a streamlined deployment process. If we want to do something more advanced and customize this behavior, we can add an ARM template called "azuredeploy.json" at the root of the repository, which causes users to be presented with different inputs and configures the services as specified.

I'm going to walk you through the workflow I used to test the template before checking it into the repository, as well as describe some of the special behaviors of the "Deploy to Azure" site.

Adding a button

To add a deployment button, insert the following markdown into your README.md file:

[![Deploy to Azure](https://azuredeploy.net/deploybutton.svg)](https://deploy.azure.com/?repository=https://github.com/loklak/loklak_server)

How it works

When a user clicks on the button, a “referrer” header is sent to azuredeploy.net which contains the location of the Git repository of loklak_server to deploy from.

An Example Template

This is a blank template which shows how Azure divides a template into its sections:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
  },
  "variables": {
  },
  "resources": [
  ],
  "outputs": {
  }
}

Following the above template, in the case of the loklak server the parameters used are the name, the image (i.e., the Docker image), the port, the number of CPUs to be utilized, and the memory required.

In the resources section we use a container; the type of the resource is

"type": "Microsoft.ContainerInstance/containerGroups",

 

As output, we expect a public IP address to access the Azure cloud instance we created.

Everything under the root "parameters" property will be inputs into our template. Then these parameter values feed into the resources defined later in the template with the "[parameters('paramName')]" syntax.
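Putting these pieces together, a minimal azuredeploy.json for such a container deployment could look roughly like the sketch below. This is not the actual loklak template; the resource layout follows the general Microsoft.ContainerInstance/containerGroups schema, and the parameter names and default values are illustrative assumptions.

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "name":   { "type": "string" },
    "image":  { "type": "string" },
    "port":   { "type": "int", "defaultValue": 80 },
    "cpu":    { "type": "int", "defaultValue": 1 },
    "memory": { "type": "int", "defaultValue": 2 }
  },
  "resources": [
    {
      "type": "Microsoft.ContainerInstance/containerGroups",
      "apiVersion": "2018-10-01",
      "name": "[parameters('name')]",
      "location": "[resourceGroup().location]",
      "properties": {
        "osType": "Linux",
        "containers": [
          {
            "name": "[parameters('name')]",
            "properties": {
              "image": "[parameters('image')]",
              "ports": [ { "port": "[parameters('port')]" } ],
              "resources": {
                "requests": {
                  "cpu": "[parameters('cpu')]",
                  "memoryInGB": "[parameters('memory')]"
                }
              }
            }
          }
        ],
        "ipAddress": {
          "type": "Public",
          "ports": [ { "protocol": "TCP", "port": "[parameters('port')]" } ]
        }
      }
    }
  ],
  "outputs": {
    "containerIPAddress": {
      "type": "string",
      "value": "[reference(resourceId('Microsoft.ContainerInstance/containerGroups', parameters('name'))).ipAddress.ip]"
    }
  }
}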

Try the “Deploy to Azure” Button here:



Resources


Performing Oscillator Experiments with PSLab

Using PSLab we can read the waveforms generated by different oscillators. First, let's discuss what an oscillator is. An oscillator is an electronic circuit that converts unidirectional current flow from a DC source into an alternating waveform. Oscillators can produce a sine wave, triangular wave or square wave. Oscillators are used in computers, clocks, watches, radios, and metal detectors. In this post, we are going to discuss 3 different types of oscillators.

  • Colpitts Oscillator
  • Phase Shift Oscillator
  • Wien Bridge Oscillator

Colpitts Oscillator

The Colpitts oscillator produces sinusoidal oscillations. The Colpitts oscillator has a tank circuit which consists of two capacitors in series and an inductor connected in parallel to the serial combination. The two capacitors in series produce a 180° phase shift which is inverted by another 180° to produce the required positive feedback. The frequency of the oscillations is determined by the value of the capacitors and inductor in the tank circuit.
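For reference, the oscillation frequency is set by the tank components, with the two series capacitors acting as one equivalent capacitance:

$$f = \frac{1}{2\pi\sqrt{L\,C_{eq}}}, \qquad C_{eq} = \frac{C_1\,C_2}{C_1 + C_2}$$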


Phase Shift Oscillator

A phase-shift oscillator produces a sine wave output using regenerative feedback obtained from the combination of resistor and capacitor. This regenerative feedback from the RC network is due to the ability of the capacitor to store an electric charge.
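For the common three-stage RC network with identical resistors R and capacitors C, each stage contributes 60° of phase shift and the output frequency is approximately:

$$f = \frac{1}{2\pi R C \sqrt{6}}$$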


Wien Bridge Oscillator

A Wien bridge oscillator generates sine waves. It can generate a large range of frequencies and is based on a bridge circuit. It employs two transistors, each producing a phase shift of 180°, and thus producing a total phase-shift of 360° or 0°. It is simple in design, compact in size, and stable in its frequency output.
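With equal resistors R and capacitors C in the two arms of the bridge, the output frequency is:

$$f = \frac{1}{2\pi R C}$$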

 


Mapping output waves from the Oscillator Circuits in PSLab Android app

To make the PSLab Android app support experiments that read the waveforms coming from these oscillators, we reused the Oscilloscope Activity. In order to analyze the frequencies of the captured waves, we used sine fitting. The sine fitting function simply takes the data points and returns the amplitude, frequency, offset and phase shift of the wave.
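A sine fit of this kind can be sketched in Python with numpy and scipy as follows. This is an illustration of the idea only, not the exact PSLab implementation; the function and variable names are assumed.

import numpy as np
from scipy.optimize import curve_fit

def sine_func(t, amplitude, frequency, phase, offset):
    return amplitude * np.sin(2 * np.pi * frequency * t + phase) + offset

def sine_fit(t, y):
    """Fit a sine wave to (t, y) samples and return amplitude, frequency, phase, offset."""
    t = np.asarray(t)
    y = np.asarray(y)
    offset_guess = y.mean()
    amplitude_guess = (y.max() - y.min()) / 2
    # Guess the frequency from the dominant FFT bin (ignoring the DC component)
    spectrum = np.abs(np.fft.rfft(y - offset_guess))
    freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
    frequency_guess = freqs[np.argmax(spectrum[1:]) + 1]
    popt, _ = curve_fit(sine_func, t, y,
                        p0=[amplitude_guess, frequency_guess, 0, offset_guess])
    return popt  # [amplitude, frequency, phase, offset]; frequency is in 1/(time unit)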

The following is a glimpse of output signals from the Oscillators being captured by PSLab Android.

Resources

Read more on oscillators from the following links:


Auto Deployment of SUSI Web Chat on gh-pages with Travis-CI

SUSI Web Chat uses Travis CI with a custom build script to deploy itself on gh-pages after every pull request is merged into the project. The build system automatically updates the latest changes hosted on chat.susi.ai. In this blog, we will see how to automatically deploy the repository on gh-pages.

To proceed with auto deploy on gh-pages branch,

  1. We first need to set up Travis for the project.
  2. Register on https://travis-ci.org/ and turn on Travis for this repository.

Next, we add .travis.yml in the root directory of the project.

# Set system config
sudo: required
dist: trusty
language: node_js

# Specifying node version
node_js:
  - 6

# Running the test script for the project
script:
  - npm test

# Running the deploy script by specifying the location of the script, here ‘deploy.sh’ 

deploy:
  provider: script
  script: "./deploy.sh"


# We proceed with the cache if there are no changes in the node_modules
cache:
  directories:
    - node_modules

branches:
  only:
    - master

To find the code go to https://github.com/fossasia/chat.susi.ai/blob/master/.travis.yml

The Travis configuration file ensures that the project builds for every change made, using the npm test command; in our case, it only considers changes made on the master branch.

If one wants to watch other branches, one can add the respective branch names in the Travis configuration. After checking that the build passes, we need to automatically push the changes made, for which we will use a bash script.

#!/bin/bash

SOURCE_BRANCH="master"
TARGET_BRANCH="gh-pages"

# Pull requests and commits to other branches shouldn't try to deploy.
if [ "$TRAVIS_PULL_REQUEST" != "false" -o "$TRAVIS_BRANCH" != "$SOURCE_BRANCH" ]; then
    echo "Skipping deploy; The request or commit is not on master"
    exit 0
fi

# Save some useful information
REPO=`git config remote.origin.url`
SSH_REPO=${REPO/https:\/\/github.com\//git@github.com:}
SHA=`git rev-parse --verify HEAD`

ENCRYPTED_KEY_VAR="encrypted_${ENCRYPTION_LABEL}_key"
ENCRYPTED_IV_VAR="encrypted_${ENCRYPTION_LABEL}_iv"
ENCRYPTED_KEY=${!ENCRYPTED_KEY_VAR}
ENCRYPTED_IV=${!ENCRYPTED_IV_VAR}
openssl aes-256-cbc -K $ENCRYPTED_KEY -iv $ENCRYPTED_IV -in deploy_key.enc -out ../deploy_key -d

chmod 600 ../deploy_key
eval `ssh-agent -s`
ssh-add ../deploy_key

# Cloning the repository to repo/ directory,
# Creating gh-pages branch if it doesn't exists else moving to that branch
git clone $REPO repo
cd repo
git checkout $TARGET_BRANCH || git checkout --orphan $TARGET_BRANCH
cd ..

# Setting up the username and email.
git config user.name "Travis CI"
git config user.email "$COMMIT_AUTHOR_EMAIL"

# Cleaning up the old repo's gh-pages branch except CNAME file and 404.html
find repo/* ! -name "CNAME" ! -name "404.html" -maxdepth 1  -exec rm -rf {} \; 2> /dev/null
cd repo

git add --all
git commit -m "Travis CI Clean Deploy : ${SHA}"

git checkout $SOURCE_BRANCH

# Actual building and setup of current push or PR.
npm install
npm run build
mv build ../build/

git checkout $TARGET_BRANCH
rm -rf node_modules/
mv ../build/* .
cp index.html 404.html

# Staging the new build for commit; and then committing the latest build
git add -A
git commit --amend --no-edit --allow-empty

# Deploying only if the build has changed
if [ -z "`git diff --name-only HEAD HEAD~1`" ]; then

  echo "No Changes in the Build; exiting"
  exit 0

else
  # There are changes in the Build; push the changes to gh-pages
  echo "There are changes in the Build; pushing the changes to gh-pages"

  # Actual push to gh-pages branch via Travis
  git push --force $SSH_REPO $TARGET_BRANCH
fi

This bash script enables the Travis CI user to push changes to gh-pages; for this, we need to store the credentials of the repository in encrypted form.

1. To generate the public/private RSA keys, we use the following command:

ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

2. It will generate the keys as .ssh/id_rsa and .ssh/id_rsa.pub in your home directory.

3. Make sure you do not enter any passphrase while generating the credentials, otherwise Travis will get stuck at the time of decrypting the keys.

4. Copy the public key and add it as a deploy key to the repository in its GitHub settings.

5. We also need to set the environment variable ENCRYPTED_KEY in Travis. This can be set from the settings in the Travis repository dashboard.

6. Next, install the Travis CLI for encryption of the keys.

sudo apt install ruby ruby-dev
sudo gem install travis

7. Make sure you are logged in to Travis; to log in, use the following command:

travis login

8. Make sure you have copied the private SSH key to a file named deploy_key, then encrypt the deploy_key and add the encrypted file to the root of your repository using the command:

travis encrypt-file deploy_key

9. After successful encryption, you will see a message:

Please add the following to your build script (before_install stage in your .travis.yml, for instance):

openssl aes-256-cbc -K $encrypted_3dac6bf6c973_key -iv $encrypted_3dac6bf6c973_iv -in deploy_key.enc -out ../deploy_key -d
10. Add the above-generated line to your build script as indicated and push the changes to your master branch. Do not push the deploy_key, only the encrypted file, i.e., deploy_key.enc.

11. Finally, push the changes, create a pull request and merge it to test the deployment. Visit the Travis logs for more details and debugging.

Resources


Feeds Moderation in loklak Media Wall

Loklak Media Wall provides client-side filters for the entities received from the loklak status.json API, like blocking feeds from a particular user, removing duplicate feeds and hiding a particular feed post, for moderating feeds. To implement this, we need pure functions which remove the requested type of feeds and return a new array of feeds. Moreover, the original set of data must also be stored in an array so that if filters are removed, the requested data can be provided back to the user.

In this blog, I will explain how I implemented client-side filters to filter out particular types of feeds and provide the user with cleaner data as requested.

Types of filters

There are four client-side filters currently provided by Loklak media wall:

    • Profanity Filter: Checks for feeds that might be offensive and removes them.
    • Remove Duplicate: Removes duplicate feeds and retweets from the original feeds.
    • Hide Feed: Removes a particular feed from the feeds.
    • Block User: Blocks a user and removes all the feeds from that particular user.

It is also important to ensure that on pagination, new feeds are filtered out based on the previous user requested moderation.

Flow Chart

The flow chart explains how the different entities received from the server are filtered and how the original set of entities is maintained, so that if the user removes the filter, the original entities are recovered.

Working

Profanity Filter

To prevent any obscene language used in a feed status from showing up on the media wall and to provide cleaner data, the profanity filter can be used. For this filter, the loklak search.json API provides a field classifier_profanity which states whether some swear word is used in the status. We can check the value of this field and filter out the feed accordingly.

export function profanityFilter(feeds: ApiResponseResult[]): ApiResponseResult[] {
  const filteredFeeds: ApiResponseResult[] = [];
  feeds.forEach((feed) => {
    if (feed.classifier_language !== null && feed.classifier_profanity !== undefined) {
      if (feed.classifier_profanity !== 'sex' && feed.classifier_profanity !== 'swear') {
        filteredFeeds.push(feed);
      }
    } else {
      filteredFeeds.push(feed);
    }
  });
  return filteredFeeds || feeds;
}

Here, we check that the classifier_profanity field is neither 'swear' nor 'sex', which clearly classifies the feeds, and we push the status accordingly. Moreover, if no classifier_profanity field is provided for a particular feed, we push the feed into the filtered feeds.

Remove Duplicate

The remove duplicate filter removes tweets that are either retweets or copies of some feed, and returns just one original feed. We need to compare the id_str field, which is the status id of the feed, and remove the duplicate feeds. For this filter, we create a map, check each feed against the map object, remove the duplicate feeds iteratively and return an array of feeds with unique elements.

export function removeDuplicateCheck(feeds: ApiResponseResult[]): ApiResponseResult[] {
  const map = { };
  const filteredFeeds: ApiResponseResult[] = [];
  let v: string;
  for (let a = 0; a < feeds.length; a++) {
    v = feeds[a].id_str;
    if (!map[v]) {
      filteredFeeds.push(feeds[a]);
      map[v] = true;
    }
  }
  return filteredFeeds;
}

Hide Feed

The Hide Feed filter can be used to hide a particular feed from showing up on the media wall. It can be a great option for the user to hide some particular feed they might not want to see. Basically, when the user selects a particular feed, an action is dispatched with the payload being the status id, i.e. id_str. We then pass the feeds and the status id through a function which returns a new array with that particular feed removed. All feeds with the same id_str are removed from the feeds array.

export function hideFeed(feeds: ApiResponseResult[], statusId: string): ApiResponseResult[] {
  const filteredFeeds: ApiResponseResult[] = [];
  feeds.forEach((feed) => {
    if (feed.id_str !== statusId) {
      filteredFeeds.push(feed);
    }
  });
  return filteredFeeds || feeds;
}

The user can undo the action and let the filtered feed show up again on the media wall. To implement this, we pass the original entities, the filtered entities and the id_str of the particular entity through a function which looks for the entity with the same id_str, adds it to the filtered entities and returns the new array of filtered entities.

export function showFeed(originalFeeds: ApiResponseResult[], feeds: ApiResponseResult[], statusId: string): ApiResponseResult[] {
  const newFeeds = [...feeds];
  originalFeeds.forEach((feed) => {
    if (feed.id_str === statusId) {
      newFeeds.push(feed);
    }
  });
  return newFeeds;
}

Block User

The Block User filter can be used to block feeds from a particular user/account from showing up on the media wall. To implement this, we check the user_id field of the user and remove all the feeds with the same user id. The function accountExclusion takes the feeds and user_id (of the accounts) as parameters and returns an array of filtered feeds, removing all the feeds of the requested users/accounts.

export function accountExclusion(feeds: ApiResponseResult[], userId: string[]): ApiResponseResult[] {
  const filteredFeeds: ApiResponseResult[] = [];
  let flag: boolean;
  feeds.forEach((feed) => {
    flag = false;
    userId.forEach((user) => {
      if (feed.user.user_id === user) {
        flag = true;
      }
    });
    if (!flag) {
      filteredFeeds.push(feed);
    }
  });
  return filteredFeeds || feeds;
}

Key points

It is important to ensure that new feeds (received on pagination or on a new query) are also filtered according to the moderation the user has already requested. Therefore, before storing feeds in the state and supplying them to templates after pagination, it must be ensured that the new entities are also filtered. For this, we keep boolean variables as state properties which record whether a particular filter has been requested by the user, apply the corresponding filters to the new feeds and store the result in filteredEntities.

Also, the original feeds must be stored separately so that on removing filters the original feeds are regained. Here, entities stores the original entities received from the server.

case apiAction.ActionTypes.WALL_SEARCH_COMPLETE_SUCCESS: {
  const apiResponse = action.payload;
  let newFeeds = accountExclusion(apiResponse.statuses, state.blockedUser);
  if (state.profanityCheck) {
    newFeeds = profanityFilter(newFeeds);
  }
  if (state.removeDuplicate) {
    newFeeds = removeDuplicateCheck(newFeeds);
  }
  return Object.assign({}, state, {
    entities: apiResponse.statuses,
    filteredEntities: newFeeds,
    lastResponseLength: apiResponse.statuses.length
  });
}

case wallPaginationAction.ActionTypes.WALL_PAGINATION_COMPLETE_SUCCESS: {
  const apiResponse = action.payload;
  let newFeeds = accountExclusion(apiResponse.statuses, state.blockedUser);
  if (state.profanityCheck) {
    newFeeds = profanityFilter(newFeeds);
  }
  let filteredEntities = [...newFeeds, ...state.filteredEntities];
  if (state.removeDuplicate) {
    filteredEntities = removeDuplicateCheck(filteredEntities);
  }

  return Object.assign({}, state, {
    entities: [...apiResponse.statuses, ...state.entities],
    filteredEntities
  });
}

Reference


Showing Pull Request Build logs in Yaydoc

In Yaydoc, I added a feature to show the build status of a pull request. But there was no way for the user to see the reason for a build failure, hence I decided to show the build log in the pull request, similar to Travis CI. For this, I had to save the build log into the database, then use the GitHub status API to show the build log URL in the pull request, which redirects to the Yaydoc website where we render the build log.

StatusLog.storeLog(name, repositoryName, metadata, `temp/admin@fossasia.org/generate_${uniqueId}.txt`, function(error, data) {
  if (error) {
    status = "failure";
  } else {
    targetBranch = `https://${process.env.HOSTNAME}/prstatus/${data._id}`
  }
  github.createStatus(commitId, req.body.repository.full_name, status, description, targetBranch, repositoryData.accessToken, function(error, data) {
    if (error) {
      console.log(error);
    } else {
      console.log(data);
    }
  });
});

In the above snippet, I'm storing the build log which is generated by the build script into MongoDB, and I'm appending the MongoDB unique ID to the `prstatus` URL so that we can use that ID to retrieve the build log from the database.

exports.createStatus = function(commitId, name, state, description, targetURL, accessToken, callback) {
  request.post({
    url: `https://api.github.com/repos/${name}/statuses/${commitId}`,
    headers: {
      'User-Agent': 'Yaydoc',
      'Authorization': 'token ' + crypter.decrypt(accessToken)
    },
    "content-type": "application/json",
    body: JSON.stringify({
      state: state,
      target_url: targetURL,
      description: description,
      context: "Yaydoc CI"
    })
  }, function(error, response, body) {
    if (error !== null) {
      return callback({description: 'Unable to create status'}, null);
    }
    callback(null, JSON.parse(body));
  });
};

After saving the build log, I send a request to GitHub to show the status of the build along with the build log URL, so the user can click the Details link and see the build log.
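On the Yaydoc side, the /prstatus/<id> page then only has to look the saved log up and render it. Below is a rough sketch of such an Express route; it is not the exact Yaydoc code, and the model path, route wiring and view name are assumptions.

const express = require('express');
const router = express.Router();
const StatusLog = require('../models/statusLog'); // hypothetical model path

// Look up the stored build log by its MongoDB id and render it
router.get('/prstatus/:id', function (req, res) {
  StatusLog.findById(req.params.id, function (error, log) {
    if (error || !log) {
      return res.status(404).send('Build log not found');
    }
    res.render('prstatus', { buildLog: log }); // 'prstatus' view name is assumed
  });
});

module.exports = router;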

Resources:


Implementing Skill Detail Section in SUSI Android App

SUSI Skills are rules defined in the SUSI Skill Data repo, which are basically the responses SUSI gives to user queries. When a user queries something from the SUSI Android app, a query is made to the SUSI Server, which fetches the response from SUSI Skill Data and returns it to the app. Similarly, when we need to list all skills, an API call is made to the server to list all skills. The server then checks the SUSI Skill Data repo for the skills and returns all the required information to the app. The app then displays all the information about the skills to the user. The user can then view the details of each skill and interact on the chat interface to use that skill. This process is similar to what SUSI Skill CMS does. The CMS is a skill wiki-like interface to view all skills and edit them. Though the app cannot currently be used to edit the skills, it can be used to view them and try them on the chat interface.

API Information

For listing SUSI Skill groups, we have to call the /cms/getGroups.json endpoint.

This will give you all the groups in the SUSI model in which skills are present. The current response:

{
  "session": {"identity": {
    "type": "host",
    "name": "14.139.194.24",
    "anonymous": true
  }},
  "accepted": true,
  "groups": [
    "Small Talk",
    "Entertainment",
    "Problem Solving",
    "Knowledge",
    "Assistants",
    "Shopping"
  ],
  "message": "Success: Fetched group list"
}

So, the groups object gives all the groups in which SUSI Skills are located.

Next comes fetching of skills. For that, the endpoint is /cms/getSkillList.json?group=GROUP_NAME

Since we want all skills to be fetched, we call this api for every group. So, for example we will be calling http://api.susi.ai/cms/getSkillList.json?group=Entertainment for getting all skills in group “Entertainment”. Similarly for other groups as well.
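In the app this boils down to one HTTP GET call per group. A Retrofit-style interface for these two endpoints could be sketched as below; the interface and response model names are assumptions for illustration, not the exact ones used in the SUSI Android code.

import retrofit2.Call
import retrofit2.http.GET
import retrofit2.http.Query

// Hypothetical response models mirroring the JSON shown in this post
data class GroupsResponse(val groups: List<String>)
data class SkillListResponse(val group: String, val skills: Map<String, Any>)

interface SusiSkillService {

    // /cms/getGroups.json – lists all skill groups
    @GET("/cms/getGroups.json")
    fun getGroups(): Call<GroupsResponse>

    // /cms/getSkillList.json?group=GROUP_NAME – lists all skills of one group
    @GET("/cms/getSkillList.json")
    fun getSkillList(@Query("group") group: String): Call<SkillListResponse>
}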

Sample response of skill:

{
  "accepted": true,
  "model": "general",
  "group": "Shopping",
  "language": "en",
  "skills": {"amazon_shopping": {
    "image": "images/amazon_shopping.png",
    "author_url": "https://github.com/meriki",
    "examples": ["Buy a dress"],
    "developer_privacy_policy": null,
    "author": "Y S Ramya",
    "skill_name": "Shop At Amazon",
    "dynamic_content": true,
    "terms_of_use": null,
    "descriptions": "Searches items on Amazon.com for shopping",
    "skill_rating": null
  }},
  "message": "Success: Fetched skill list",
  "session": {"identity": {
    "type": "host",
    "name": "14.139.194.24",
    "anonymous": true
  }}
}

It gives all details about skills:

  1. image
  2. author_url
  3. examples
  4. developer_privacy_policy
  5. author
  6. skill_name
  7. dynamic_content
  8. terms_of_use
  9. descriptions
  10. skill_rating

Implementation in SUSI Android App

Skill Detail Section UI of Google Assistant

Skill Detail Section UI of SUSI SKill CMS

Skill Detail Section UI of SUSI Android App

The UI of the skill detail section in the SUSI Android App is a mixture of the UI of the skill detail section in the Google Assistant app and SUSI Skill CMS. It displays the details of skills in a beautiful manner, with a horizontal RecyclerView used to display the examples.

So, we have to display following details about the skill in Skill Detail Section:

  1. Skill Name
  2. Author Name
  3. Skill Image
  4. Try it Button
  5. Description
  6. Examples
  7. Rating
  8. Content type (Dynamic/Static)
  9. Terms of Use
  10. Developer’s Privacy policy

Let’s see the implementation.

1. Whenever a skill Card View is clicked, showSkillDetailFragment() is called and it opens a new instance of a fragment named SkillDetailsFragment which shows details of the skill. We have to provide necessary information while starting the fragment. This information is passed as a Serializable.

fun showSkillDetailFragment(skillData: SkillData, skillGroup: String) {
   val skillDetailsFragment = SkillDetailsFragment.newInstance(skillData,skillGroup)
   (context as SkillsActivity).fragmentManager.beginTransaction()
           .replace(R.id.fragment_container, skillDetailsFragment)
           .commit()
}

2. The data which was passed as a Serializable object is now cast back to the required form and a method to set up the UI is called.

companion object {
   val SKILL_KEY = "skill_key"
   val SKILL_GROUP = "skill_group"
   fun newInstance(skillData: SkillData, skillGroup: String): SkillDetailsFragment {
       val fragment = SkillDetailsFragment()
       val bundle = Bundle()
       bundle.putSerializable(SKILL_KEY, skillData as Serializable)
       bundle.putString(SKILL_GROUP, skillGroup)
       fragment.arguments = bundle

       return fragment
   }
}

override fun onCreateView(inflater: LayoutInflater, container: ViewGroup?, savedInstanceState: Bundle?): View {
   skillData = arguments.getSerializable(
           SKILL_KEY) as SkillData
   skillGroup = arguments.getString(SKILL_GROUP)
   return inflater.inflate(R.layout.fragment_skill_details, container, false)
}

override fun onViewCreated(view: View?, savedInstanceState: Bundle?) {
   setupUI()
   super.onViewCreated(view, savedInstanceState)
}

3. The setupUI() method then calls separate method for setting every part of the UI like image, name etc.

fun setupUI() {
   setImage()
   setName()
   setAuthor()
   setTryButton()
   setDescription()
   setExamples()
   setRating()
   setDynamicContent()
   setPolicy()
   setTerms()
}

4. One example of setting a part of the UI is setting the author name. It checks whether the author name is null or not. After that, it anchors the author's GitHub account link to his/her name.

fun setAuthor() {
   skill_detail_author.text = "Author : ${activity.getString(R.string.no_skill_author)}"
   if(skillData.author != null && !skillData.author.isEmpty()){
       if(skillData.authorUrl == null || skillData.authorUrl.isEmpty())
           skill_detail_author.text = "Author : ${skillData.skillName}"
       else {
           skill_detail_author.linksClickable = true
           skill_detail_author.movementMethod = LinkMovementMethod.getInstance()
           if (android.os.Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
               skill_detail_author.text = Html.fromHtml("Author : <a href=\"${skillData.authorUrl}\">${skillData.author}</a>", Html.FROM_HTML_MODE_COMPACT)
           } else {
               skill_detail_author.text = Html.fromHtml("Author : <a href=\"${skillData.authorUrl}\">${skillData.author}</a>")
           }
       }
   }
}
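In the same way, setExamples() can wire the skill's example queries to the horizontal RecyclerView mentioned earlier. A rough sketch follows; the view id skill_detail_examples and the ExamplesAdapter class are assumed names, not the exact ones in the app.

fun setExamples() {
    // ExamplesAdapter would be a simple RecyclerView.Adapter binding each example string to an item view
    skill_detail_examples.layoutManager =
            LinearLayoutManager(activity, LinearLayoutManager.HORIZONTAL, false)
    skill_detail_examples.adapter = ExamplesAdapter(skillData.examples)
}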

Summary

So, this blog talked about how the skill detail section in the SUSI Android App is implemented. This included how a network call is made, the logic for making different network calls, and making a horizontal RecyclerView for displaying examples. So, if you are looking forward to contributing to the SUSI Android App, this can help you a little. Even if not, it may help you understand how to implement a horizontal RecyclerView similar to the one in the Google Play Store.

References

  1. To know about servlets https://en.wikipedia.org/wiki/Java_servlet
  2. To see how to implement one https://www.javatpoint.com/servlet-tutorial
  3. To see how to make network calls in android using Retrofit https://guides.codepath.com/android/Consuming-APIs-with-Retrofit
  4. To see how to implement custom RecyclerView Adapter https://www.survivingwithandroid.com/2016/09/android-recyclerview-tutorial.html

Implementing Event Copy API in Open Event Frontend

In Open Event Frontend, we give the organizer the facility to create a copy of an event and then make the modifications he wants to the copied event. Thus, it is easy for the organizer to create multiple events with the same sponsors, sessions, etc. For this, we implemented the event copy API in the frontend.
We achieved the copying of events as follows:
Since the event copy API is of application/json type, we used a plain POST request to copy the event rather than using Ember Data. For this, we use the loader service which is injected throughout the app. To copy the event we provide a "Copy" button which looks as follows:

 <button class="ui button {{if isCopying 'loading'}}" {{action 'copyEvent'}} disabled={{isCopying}}>
    <i class="copy icon"></i>
        {{t 'Copy'}}
 </button>

Thus, we trigger an action ‘copyEvent’ on clicking the Copy button. The action is defined in controller as follows:

 copyEvent() {
      this.set('isCopying', true);
      this.get('loader')
        .post(`events/${this.get('model.id')}/copy`, {})
        .then(copiedEvent => {
          this.transitionToRoute('events.view.edit', copiedEvent.identifier);
          this.get('notify').success(this.l10n.t('Event copied successfully'));
        })
        .catch(() => {
          this.get('notify').error(this.l10n.t('Copying of event failed'));
        })
        .finally(() => {
          this.set('isCopying', false);
        });
    }

The endpoint to copy the event as defined in our API is:

POST : /v1/events/{identifier}/copy
Content-Type: application/vnd.api+json
Authorization: JWT <Auth Key>
Request body: {}

Thus, we make a POST request to the given URL, passing the event id of the event to be copied and an empty object as the request body. On a successful response from the server, we get the identifier of a new event whose info is the same as the original. We then redirect the user to the edit details route where he can change whatever info he wants.
This is how we copy an event in Open Event Frontend.

Resources: Docs on loader service in Ember JS


Implementation of Image Scraper and Teaser Text in Query Server

The query server helps one to scrape search engines like Google, Yahoo, Bing and DuckDuckGo and get the results in JSON/XML format. It also stores the retrieved results in MongoDB for analytical purposes. We have used Beautiful Soup for scraping the results in the query server.


In this blog post I will discuss two recent implementations in the query server and then end with an introduction of the separate scrapers for the query server.

Implementation of teaser text for Google, Yahoo, and DuckDuckGo:

Teaser text is basically the description that is provided by search engines for each search result. This can be implemented by scraping the description of each result and pushing it into a list. This is done in the query server using Beautiful Soup.
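As an illustration, a stripped-down teaser scraper for Google could look like the sketch below. The CSS class used for snippets ('st' here) is an assumption about Google's markup at the time and changes often; the real implementation is in the pull request linked below.

import requests
from bs4 import BeautifulSoup

def google_teaser_text(query):
    """Return a list of result descriptions (teaser texts) for a Google search.

    Sketch only: the 'st' class is assumed from Google's markup at the time,
    not taken from the actual query-server code.
    """
    response = requests.get(
        'https://www.google.com/search',
        params={'q': query},
        headers={'User-Agent': 'Mozilla/5.0'}
    )
    soup = BeautifulSoup(response.text, 'html.parser')
    teasers = []
    for snippet in soup.find_all('span', class_='st'):
        teasers.append(snippet.get_text())
    return teasers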

The implementation details of this feature are available in this pull request:

https://github.com/fossasia/query-server/pull/72

And finally we have achieved scraping teaser text for all supported search engines.

Implementation of the image scraper for Google in the query server:

Scraping images on Google is a bit different from scraping normal text results. Google keeps metadata of the original image in the rg_meta tag of the div containing the thumbnail of the image. We cannot scrape just the thumbnail, because thumbnails are of low quality and are stored on Google's servers, whereas the links available in the metadata point to the original source. So, using the metadata of the available images, we have scraped the images on Google.
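A rough sketch of this approach is shown below. The metadata keys ('ou' for the original URL, 'ity' for the image type) are assumptions about Google's JSON at the time, not guaranteed field names; the exact code is in the pull request linked below.

import json
from bs4 import BeautifulSoup

def google_image_results(html):
    """Extract original image links from a Google Images result page.

    Sketch only: parses the rg_meta divs described above; key names are assumed.
    """
    soup = BeautifulSoup(html, 'html.parser')
    results = []
    for meta in soup.find_all('div', class_='rg_meta'):
        data = json.loads(meta.get_text())
        results.append({'link': data.get('ou'), 'type': data.get('ity')})
    return results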

The implementation details of the image scraper for Google are available at https://github.com/fossasia/query-server/pull/73

Also, we have separated the code into one scraper file for each search engine using an object-oriented approach, whereas before there was only one scraper file for all search engines: https://github.com/fossasia/query-server/pull/67

Resources

BeautifulSoup: https://www.crummy.com/software/BeautifulSoup/bs4/doc/


Performing Custom Experiments with PSLab

PSLab has the capability to perform a variety of experiments. The PSLab Android App and the PSLab Desktop App have built-in support for about 70 experiments. The experiments range from a variety of trivial ones meant for school level to complicated ones meant for college students. However, it is nearly impossible to support the vast variety of experiments that can be performed using simple electronic circuits.

So, this blog intends to show how PSLab can be efficiently used for performing experiments which are otherwise not a part of the built-in experiments of PSLab. PSLab might have some limitations in its hardware; however, in almost all types of experiments it proves to be good enough.

  • Identifying the requirements for experiments

    • The user needs to identify the tools which are necessary for analysing the circuit in a given experiment. Oscilloscope would be essential for most experiments. The voltage & current sources might be useful if the circuit requires DC sources and similarly, the waveform generator would be essential if AC sources are needed. If the circuit involves the use and analysis of data of sensor, the sensor analysis tools might prove to be essential.
    • The circuit diagram of any given experiment gives a good idea of the requirements. In case, if the requirements are not satisfied due to the limitations of PSLab, then the user can try out alternate external features.
  • Using the features of PSLab

  • Using the oscilloscope
    • Oscilloscope can be used to visualise the voltage. The PSLab board has 3 channels marked CH1, CH2 and CH3. When connected to any point in the circuit, the voltages are displayed in the oscilloscope with respect to the corresponding channels.
    • The MIC channel can be used if the input is taken from a microphone. It is necessary to connect the GND of the channels to the common ground of the circuit, otherwise some unnecessary voltage might be added to the channels.

  • Using the voltage/current source
    • The voltage and current sources on board can be used for requirements within the range of +5V. The sources are named PV1, PV2, PV3 and PCS with V1, V2 and V3 standing for voltage sources and CS for current source. Each of the sources have their own dedicated ranges.
    • While using the sources, keep in mind that the power drawn from the PSLab board should be quite less than the power drawn by the board from the USB bus.
      • USB 3.0 – 4.5W roughly
      • USB 2.0 – 2.5W roughly
      • Micro USB (in phones) – 2W roughly
    • PSLab board draws a current of 140 mA when no other components are connected. So, it is advisable to limit the current drawn to less than 200 mA to ensure the safety of the device.
    • It is better to do a rough calculation of the power requirements in mind before utilising the sources, otherwise attempting to draw excess power will damage the device (a quick worked example follows this list).

  • Using the Waveform Generator
    • The waveform generator in PSLab is limited to 5 – 5000 Hz. This range is usually sufficient for most experiments. If the requirements are beyond this range, it is better to use an external function generator.
    • Both sine and square waves can be produced using the device. In addition, there is a feature to set the duty cycle in case of square waves.
  • Sensor Quick View and Sensor Data Logger
    • PSLab comes with the built in support for several plug and play sensors. The support for more sensors will be added in the future. If an experiment requires real time visualisation of sensor data, the Sensor Quick View option can be used whereas for recording the data for sensors for a period of time, the Sensor Data Logger can be used.
  • Analysing the Experiment

    • The oscilloscope is the most common tool for circuit analysis. The oscilloscope can sample data at very high frequencies (~250 kHz). The waveform at any point can be observed by connecting the channels of the oscilloscope in the manner mentioned above.
    • The oscilloscope has some features which will be essential like Trigger to stabilise the waveforms, XY Plot to plot characteristics graph of some devices, Fourier Transform of the Waveforms etc. The tools mentioned here are simple but highly useful.
    • For analysing the sensor data, the Sensor Quick View can be paused at any instant to get the data at any instant. Also, the logged data in Sensor Data Logger can be exported as a TXT/CSV file to keep a record of the data.
  • Additional Insight

    • The PSLab desktop app comes with the built-in support for the ipython console.
    • The desired quantities like voltages, currents, resistance, capacitance etc. can also be measured by using simple python commands through the ipython console.
    • A simple python script can be written to satisfy all the data requirements for the experiment. An example for the same is shown below.
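Before moving to the script, here is a quick illustration of the power budgeting mentioned above: if PV1 supplies 5 V at an assumed load of 50 mA, that is 0.25 W. Adding the board's own draw of roughly 140 mA at 5 V (about 0.7 W), the total comes to roughly 0.95 W, comfortably below the ~2.5 W available from a USB 2.0 port.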

This is a script to produce two 1 kHz sine waves and capture & plot the data.

from pylab import *
from PSL import sciencelab
I=sciencelab.connect()
I.set_gain('CH1', 2) # set input CH1 to +/-4V range
I.set_gain('CH2', 3) # set input CH2 to +/-4V range
I.set_sine1(1000) # generate 1kHz sine wave on output W1
I.set_sine2(1000) # generate 1kHz sine wave on output W2
#Connect W1 to CH1, and W2 to CH2. W1 can be attenuated using the manual amplitude knob on the PSlab
x,y1,y2 = I.capture2(1600,1.75,'CH1') 
plot(x,y1) #Plot of analog input CH1
plot(x,y2) #plot of analog input CH2
show()

 

References


Using Android Palette with Glide in Open Event Organizer Android App

Open Event Organizer is an Android Application for the Event Organizers and Entry Managers. The core feature of the App is to scan a QR code from the ticket to validate an attendee’s check in. Other features of the App are to display an overview of sales, ticket management and basic editing in the Event Details. Open Event API Server acts as a backend for this App. The App uses Navigation Drawer for navigation in the App. The side drawer contains menus, event name, event start date and event image in the header. Event name and date is shown just below the event image in a palette. For a better visibility Android Palette is used which extracts prominent colors from images. The App uses Glide to handle image loading hence GlidePalette library is used for palette generation which integrates Android Palette with Glide. I will be talking about the implementation of GlidePalette in the App in this blog.

The App uses Data Binding so the image URLs are directly passed to the XML views in the layouts and the image loading logic is implemented in the BindingAdapter class. The image loading code looks like:

GlideApp
   .with(imageView.getContext())
   .load(Uri.parse(url))
   ...
   .into(imageView);

 

To implement palette generation for the event detail label, it has to be implemented along with the event image loading. GlideApp takes a request listener which implements methods on success and failure, where the palette can be generated using the loaded bitmap. With GlidePalette most of this part is covered in the library itself. It provides the GlidePalette class, which is a subclass of the GlideApp request listener and is passed to GlideApp using the listener method. In the App, the BindingAdapter class has a method named bindImageWithPalette which takes a view container, an image URL, a placeholder drawable and the ids of the image view and the palette. The relevant code is:

@BindingAdapter(value = {"paletteImageUrl", "placeholder", "imageId", "paletteId"}, requireAll = false)
public static void bindImageWithPalette(View container, String url, Drawable drawable, int imageId, int paletteId) {
   ImageView imageView = (ImageView) container.findViewById(imageId);
   ViewGroup palette = (ViewGroup) container.findViewById(paletteId);

   if (TextUtils.isEmpty(url)) {
       if (drawable != null)
           imageView.setImageDrawable(drawable);
       palette.setBackgroundColor(container.getResources().getColor(R.color.grey_600));
       for (int i = 0; i < palette.getChildCount(); i++) {
           View child = palette.getChildAt(i);
           if (child instanceof TextView)
               ((TextView) child).setTextColor(Color.WHITE);
       }
       return;
   }
   GlidePalette<Drawable> glidePalette = GlidePalette.with(url)
       .use(GlidePalette.Profile.MUTED)
       .intoBackground(palette)
       .crossfade(true);

   for (int i = 0; i < palette.getChildCount(); i++) {
       View child = palette.getChildAt(i);
       if (child instanceof TextView)
           glidePalette
               .intoTextColor((TextView) child, GlidePalette.Swatch.TITLE_TEXT_COLOR);
   }
   setGlideImage(imageView, url, drawable, null, glidePalette);
}

 

The code is pretty straightforward. The method checks the passed URL for nullability. If it is null, it sets the placeholder drawable on the image view and default colors on the text views and the palette. The GlidePalette object is generated using the initializer method `with`, which takes the image URL. The request is passed to the method setGlideImage, which loads the image and passes the GlidePalette to GlideApp as a listener. Accordingly, the palette is generated and the colors are set on the label and text views. The container view in the XML layout looks like:

<LinearLayout
   android:layout_width="match_parent"
   android:layout_height="wrap_content"
   android:orientation="vertical"
   app:paletteImageUrl="@{ event.largeImageUrl }"
   app:placeholder="@{ @drawable/header }"
   app:imageId="@{ R.id.image }"
   app:paletteId="@{ R.id.eventDetailPalette }">

 

Links:
1. Documentation for Glide Image Loading Library
2. GlidePalette Github Repository
3. Android Palette Official Documentation
