Play Youtube Videos in SUSI.AI Android app

The SUSI.AI Android app has many response functionalities, ranging from simple ANSWER type responses to complex TABLE and MAP type responses. Even with all these useful response types, however, some media-related action types were missing: the app could not play any kind of music or video. To fix this, we turned to the largest source of videos in the world and added the functionality to play YouTube videos in the app.

The app now has two build flavors, corresponding to the FDroid version and the PlayStore version respectively, so while adding the feature to play YouTube videos we had to make sure that no proprietary software was included in the FDroid version. Google provides a YouTube API that can be used to play videos inside the app itself, instead of passing an intent and playing them in the YouTube app.

Steps to integrate the YouTube API in SUSI.AI

The first step was to create an environment variable that stores the API key, so that each developer who wants to test this functionality can create an API key from the Google API console and put it in the build.gradle file in the line below:

def YOUTUBE_API_KEY = System.getenv('YOUTUBE_API_KEY') ?: '"YOUR_API_KEY"'

In the build.gradle file, a buildConfigField named YOUTUBE_API_KEY was created so that it can be used wherever we need the API key in the code. The buildConfigField was declared for both the release and debug build types as:

release {
    minifyEnabled false
    proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
    buildConfigField "String", 'YOUTUBE_API_KEY', YOUTUBE_API_KEY
}

debug {
    buildConfigField "String", 'YOUTUBE_API_KEY', YOUTUBE_API_KEY
}

The second step is to catch the audio and video action types in the ParseSusiResponseHelper.kt file, which was done by adding two constants, "video_play" and "audio_play", to the Constant class. These actions are caught in the app as:

Constant.VIDEOPLAY -> try {
  identifier = susiResponse.answers[0].actions[i].identifier
} catch (e: Exception) {
  Timber.e(e)
}

Constant.AUDIOPLAY -> try {
  identifier = susiResponse.answers[0].actions[i].identifier
} catch (e: Exception) {
  Timber.e(e)
}

The third step involves making a ViewHolder class for the audio and video play actions. A simple layout was made for this ViewHolder which displays the thumbnail of the video with a play button on top; clicking the button plays the YouTube video. Also, in the ChatFeedRecyclerAdapter we need to specify the actions as audio play and video play and then load this specific ViewHolder for YouTube videos whenever a response from the server with this particular action is fetched. The YoutubeViewHolder.java file describes the class that displays the thumbnail of the YouTube video whenever a response arrives.

public void setPlayerView(ChatMessage model) {
    this.model = model;

    if (model != null) {
        try {
            videoId = model.getIdentifier();
            String img_url = "http://img.youtube.com/vi/" + videoId + "/0.jpg";

            // Load the video thumbnail, with a play icon as the placeholder
            Picasso.with(itemView.getContext()).load(img_url)
                    .placeholder(R.drawable.ic_play_circle_filled_white_24dp)
                    .into(playerView);
        } catch (Exception e) {
            Timber.e(e);
        }
    }
}

The above method shows how the thumbnail is set for a particular YouTube video.

The fourth step is to pass the response from the server to the ChatPresenter to play the video asked for by the user. This is achieved by calling a function declared in IChatView, so that the YouTube video is played once the response is fetched from the server:

if (psh.actionType == Constant.VIDEOPLAY || psh.actionType == Constant.AUDIOPLAY) {
  // Play youtube video
  identifier = psh.identifier
  chatView?.playVideo(identifier)
}

The fifth step is a bit more involved. As we know, the app has two flavors: the FDroid version, which contains only open-source libraries and software, and the PlayStore version, which may include proprietary software. Since the YouTube API is a piece of proprietary software, we cannot include its implementation in the FDroid version, so different methods were devised to play YouTube videos in the two versions. Because we use flavors and only want the YouTube API compiled into the PlayStore version, the YouTube dependency was updated as:

playStoreImplementation files('libs/YouTubeAndroidPlayerApi.jar')
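For the flavor-qualified playStoreImplementation configuration to exist, both flavors must be declared in build.gradle. A minimal sketch (flavor names as in the post; the dimension name is an assumption):

android {
    flavorDimensions "default"
    productFlavors {
        fdroid {
            dimension "default" // FOSS-only build, no proprietary libraries
        }
        playStore {
            dimension "default" // may bundle the proprietary YouTube player
        }
    }
}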

Together, these ensure that the library is only included in the playStore version. At this step another problem is encountered: since the code in the two versions now differs, an encapsulation method was devised that lets the app keep separate code in each flavor.

To keep different code for both variants, directories named fdroid and playStore were created in the src folder, the package name was added, and finally a java folder was created in each directory to hold the flavor-specific code. An interface was created in the main directory as a means of encapsulation, so that it could be implemented differently in each flavor to provide different functionality. The interface was created as:

interface IYoutubeVid {
  fun playYoutubeVid(videoId: String)
}

An object of the interface was initialised in ChatActivity, and a separate class named YoutubeVid.kt was made in each flavor, implementing the interface IYoutubeVid. So, depending on the build variant, the YoutubeVid class of that particular flavor is called, and the video plays using whatever sources are available in that flavor.
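As a rough sketch of how the two implementations can differ — assuming the playStore flavor uses the standalone player from the YouTube Android Player API and the fdroid flavor falls back to a plain ACTION_VIEW intent; the actual code in the repository may differ:

// playStore flavor: plays the video with the proprietary YouTube player
class YoutubeVid(val activity: Activity) : IYoutubeVid {
    override fun playYoutubeVid(videoId: String) {
        val intent = YouTubeStandalonePlayer.createVideoIntent(
                activity, BuildConfig.YOUTUBE_API_KEY, videoId)
        activity.startActivity(intent)
    }
}

// fdroid flavor: no proprietary library, hand the video over to any
// installed app (browser, NewPipe, etc.) via an intent
class YoutubeVid(val activity: Activity) : IYoutubeVid {
    override fun playYoutubeVid(videoId: String) {
        val intent = Intent(Intent.ACTION_VIEW,
                Uri.parse("https://www.youtube.com/watch?v=" + videoId))
        activity.startActivity(intent)
    }
}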

In ChatActivity, the following implementation was used:

Declaration:

lateinit var youtubeVid: IYoutubeVid

Initialisation:

youtubeVid = YoutubeVid(this)

Call to the overridden function:

override fun playVideo(videoId: String) {
  Timber.d(videoId)
  youtubeVid.playYoutubeVid(videoId)
}

Final Output

References

  1. YouTube Android Player API: https://developers.google.com/youtube/android/player/
  2. Building apps with product flavors – Tomoaki Imai: https://medium.com/@tomoima525/building-multiple-regions-apps-with-product-flavors-good-and-bad-fe33e3c31060
  3. Kotlin style guide: https://android.github.io/kotlin-guides/style.html

 


Protected Skills for SUSI.AI Bot Wizard

The first version of the SUSI.AI web bot plugin works like a protected skill. A SUSI.AI skill can be created by anyone with the minimum base user role USER; in other words, anyone who is logged in. Anyone can see or edit such a skill. This is not the case with a protected skill, or bot. If a skill is protected it becomes a personal bot, and it can then only be used by the SUSI.AI web client (chatbot) created by that user. Also, only the person who created the skill is able to edit or delete it. A protected skill is not listed with all the other public skills on skills.susi.ai.

How is a skill made private?

To make a skill private, a new parameter is added to the SusiSkill.java file. This is a boolean called protected. If this parameter is true (Yes) the skill is protected; if it is false (No) the skill is not protected. To add the protected parameter, we add the following code to SusiSkill.java:

private Boolean protectedSkill = false;

You can see that protectedSkill is a boolean with initial value false.
We need to read the value of this parameter in skill code. This is done by the following code:

if (line.startsWith("::protected") && (thenpos = line.indexOf(' ')) > 0) {
    if (line.substring(thenpos + 1).trim().equalsIgnoreCase("yes")) protectedSkill = true;
    json.put("protected", protectedSkill);
}

As you can see, the value of protected is read from the skill code. If it is 'Yes' then protectedSkill is set to true, and if it is 'No' then protectedSkill is set to false.
If no protected parameter is given in the skill code, its value remains false.
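For illustration, the header of a protected skill file could then look like this (a hypothetical skill; only the ::protected line is what the code above parses):

::name My personal bot
::author susi_user
::protected Yes

hi|hello
Hello! I am your personal SUSI bot.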

How to add only protected skills from bot wizard?

Now that the protected parameter has been added to the server, we need to add it to the skill code as well: it should be 'Yes' if the user is creating a bot and 'No' if the user is creating a skill with the skill creator. To do this, we simply check whether the user is using the bot wizard. This is done by passing a prop in the CreateSkill.js file when the skill creator is being used to create a protected skill. Then we can determine whether the user is using the bot wizard with a simple if-else statement, as the following code demonstrates:

if (this.props.botBuilder) {
  code = '::protected Yes\n' + code;
} else {
  code = '::protected No\n' + code;
}

References:


Offline support for Open Event Android with ROOM

Until now we were fetching data from the server and directly displaying it to the user in the Open Event Android app. There are several problems with this approach: if the user leaves a fragment and then returns to it, the data has to be fetched again, wasting valuable network resources. There is also no offline support. So we decided to introduce a local database in the app, and ROOM was without a doubt our first choice.

What is ROOM?

According to the official documentation,

The Room persistence library provides an abstraction layer over SQLite to allow fluent database access while harnessing the full power of SQLite.

The library helps you create a cache of your app’s data on a device that’s running your app. This cache, which serves as your app’s single source of truth, allows users to view a consistent copy of key information within your app, regardless of whether users have an internet connection.

Let’s get started

Integration of the ROOM database mainly consists of 3 steps:

1. Create an entity

2. Create a DAO

3. Create a Database

That's it, you are done!

Create the entity

There are just 2 requirements in order to make a model an entity.

First, use the @Entity annotation on the model class to make it an entity. Second, you need at least one field with the @PrimaryKey annotation.

This entity class represents a table in database and all its fields are the columns of the table.

@Type("event")
@JsonNaming(PropertyNamingStrategy.KebabCaseStrategy::class)
@Entity
data class Event(
    @Id(LongIdHandler::class)
    @PrimaryKey
    val id: Long,
    val name: String,
    val identifier: String,
    val startsAt: String)

Create DAO

Data Access Objects, or DAOs for short, are used to tell the database how to put in and take out the data.

We can use the following annotations to perform simple SQL queries in ROOM:

@Insert, @Update, @Delete for the corresponding actions: inserting, updating and deleting records

@Query for creating queries — we can run SELECT statements against the database

@Dao
interface EventDao {
    @Insert(onConflict = REPLACE)
    fun insertEvents(events: List<Event>)

    @Insert(onConflict = REPLACE)
    fun insertEvent(event: Event)

    @Query("DELETE FROM Event")
    fun deleteAll()

    @Query("SELECT * from Event ORDER BY startsAt DESC")
    fun getAllEvents(): Flowable<List<Event>>

    @Query("SELECT * from Event WHERE id = :id")
    fun getEvent(id: Long): Flowable<Event>
}

Create the Database

We need to create a public abstract class that extends RoomDatabase.

After that, annotate the class as a Room database, declare the entities that belong in the database, and set the version number. Listing the entities will create the tables in the database.

Define the DAOs that work with the database, and make the database a singleton to prevent multiple instances of it being opened at the same time. A lot has been said; let's have a look at the code now.

@Database(entities = [Event::class, User::class], version = 1)
abstract class OpenEventDatabase : RoomDatabase() {

    abstract fun eventDao(): EventDao

    abstract fun userDao(): UserDao
}

Room.databaseBuilder(androidApplication(),
        OpenEventDatabase::class.java, "open_event_database")
    .fallbackToDestructiveMigration()
    .build()
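The builder above sits inside a dependency-injection definition (the androidApplication() call comes from Koin), which already yields a single shared instance. Without a DI framework, a common sketch of the singleton pattern for Room looks like this (names assumed; exact imports depend on the Room version used):

@Database(entities = [Event::class, User::class], version = 1)
abstract class OpenEventDatabase : RoomDatabase() {

    abstract fun eventDao(): EventDao
    abstract fun userDao(): UserDao

    companion object {
        @Volatile private var INSTANCE: OpenEventDatabase? = null

        // Double-checked locking so only one database instance is ever built
        fun getInstance(context: Context): OpenEventDatabase =
            INSTANCE ?: synchronized(this) {
                INSTANCE ?: Room.databaseBuilder(
                        context.applicationContext,
                        OpenEventDatabase::class.java,
                        "open_event_database")
                    .fallbackToDestructiveMigration()
                    .build()
                    .also { INSTANCE = it }
            }
    }
}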

The important thing here is that all operations must be done in the background thread. You can do it by using AsyncTask, Handler, RxJava or anything else.
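For example, with the Flowable-returning DAO above, a read can be kept off the main thread using RxJava (a sketch, assuming RxJava 2 with the Android schedulers; the variable names are placeholders):

database.eventDao().getAllEvents()
    .subscribeOn(Schedulers.io())              // run the query on the IO thread pool
    .observeOn(AndroidSchedulers.mainThread()) // deliver results on the main thread
    .subscribe { events ->
        // update the UI with the cached events
    }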

Resources

  1. Room Official Documentation: https://developer.android.com/topic/libraries/architecture/room
  2. Google Code Lab: https://codelabs.developers.google.com/codelabs/android-room-with-a-view/#0

Integrating Gravatar and Anonymizing Email Address in Feedback Section

SUSI skills have a nice feedback system that allows users to rate skills from 1 star to 5 stars and shows the ratings on the skill screens. SUSI also allows the user to post feedback about skills and displays it. You can check out how posting feedback was implemented here and how the displaying feedback feature was implemented here. To enhance the user experience, we are adding the user's gravatar to the feedback section, and to respect user privacy, we are anonymizing the email addresses displayed there. In this post, we will see how these features were implemented in SUSI iOS.

Integrating Gravatar –

We show the user's gravatar next to their feedback. Gravatar is a service that provides globally unique avatars. We use the user's email address to get the gravatar. The most basic gravatar image request URL looks like this:

https://www.gravatar.com/avatar/HASH

where HASH is replaced with the calculated hash of the specific email address we are requesting. We use the MD5 hash function to hash the user's email address.

The MD5 hashing algorithm is a one-way cryptographic function that accepts a message of any length as input and returns as output a fixed-length digest value to be used for authenticating the original message.

In SUSI iOS, we have MD5Digest.swift file that gives the hash value of email string. We are using the following method to set gravatar:

if let userEmail = feedback?.email {
    setGravatar(from: userEmail)
}

func setGravatar(from emailString: String) {
    let baseGravatarURL = "https://www.gravatar.com/avatar/"
    let emailMD5 = emailString.utf8.md5.rawValue
    let imageString = baseGravatarURL + emailMD5 + ".jpg"
    let imageURL = URL(string: imageString)
    gravatarImageView.kf.setImage(with: imageURL)
}

Anonymizing User’s Email Address –

Before the implementation of this feature, the user's full email address was displayed in the feedback section and on the see-all-reviews screen. To respect the privacy of the user, we now only show the email address up to the @ sign.

In the Feedback object, we have the email address string, which we modify to show only up to the @ sign in the following way:

if let userEmail = feedback?.email, let emailIndex = userEmail.range(of: "@")?.upperBound {
    userEmailLabel.text = String(userEmail.prefix(upTo: emailIndex)) + "..."
}
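As a quick illustration with a made-up address, the label is rendered like this:

let email = "susi.user@example.com" // hypothetical address
if let emailIndex = email.range(of: "@")?.upperBound {
    print(String(email.prefix(upTo: emailIndex)) + "...") // prints "susi.user@..."
}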

 

Final Output –

Resources –

  1. Post feedback for SUSI Skills in SUSI iOS
  2. Displaying Skills Feedback on SUSI iOS
  3. What is MD5?

Voice interfaces


(This blog post is based on a meetup that was held about voice interfaces.)

Evolution of User Interfaces

From keyboards, to GUIs, to Touch, and now Voice!

First we used punch cards to talk to machines. It was very costly, used mostly for research, and it was our first experience with machines.

Punched cards

At the beginning of computing came the first large computers: huge, noisy boxes whose management was so complex that only universities and large companies could afford to run them.

How it worked is that you wrote your code and used a machine that punched holes into cards, and the computer would parse this data, convert it to its own internal representation and start a job for it. These machines were new and many people wanted to use them, so there were long queues. They were very expensive and difficult to use, there was no user interface design at all, and they were not designed for the public, so that was never a consideration.

Fortunately, computing kept evolving and access became increasingly common, although computers were still mainly products for work. For years, punched cards were used to enter the different orders, which was not comfortable at all.

Then Xerox arrived with its graphical interface — an important leap in interacting with a computer because, until then, you could only work through command lines and had to be an expert. Unfortunately, few remember the people of Xerox PARC and their contribution to the world of information technology. Most think of Apple and its Macintosh, or Microsoft and Windows 1.0, as the "inventors" of graphical interfaces. But no, it was not like that. What we cannot deny Apple and Microsoft, especially the former, is their weight in the creation of personal computers and their democratization, so that we could all have one.

Then comes the command line

The command line (or CLI) is an interface: a method of giving written instructions to the underlying program. This interface is customarily called the system console or command console. It interacts with information in the simplest way possible, without graphics or anything other than raw text. Orders are written as lines of text (hence the name), and, if the programs respond, they usually do so by printing information on the following lines.

Almost any program can be designed to offer the user some kind of CLI. For example, almost all first-person PC games have a built-in command-line interface, used for diagnostic and administrative tasks. As a primary work tool, command lines are mainly used by programmers and system administrators, especially on Unix-based operating systems; in scientific and engineering environments; and by a smaller subset of advanced home users.

To use a command line, you just need the keyboard: you type commands and the computer replies to those queries. It is still widely used today because it is one of the easiest interfaces to build and one of the most powerful.

CLI implementations

Programs that use a CLI to interact with the kernel of an operating system are often called shells or shell interpreters. Some examples are the various Unix shells (sh, ksh, csh, tcsh, bash, etc.), the historical CP/M, and the DOS command.com, the latter two strongly based on DEC's RSTS and RSX CLIs. Microsoft's command-line interface MSH (Microsoft Shell, codename Monad, later released as PowerShell) combines features of traditional Unix shells with the .NET object-oriented framework.

Some applications provide both a CLI and a GUI. An example is the CAD program AutoCAD. The scientific/engineering numerical-computing package MATLAB provides no GUI for some calculations, but the CLI can perform any calculation. The three-dimensional modeling program Rhinoceros 3D (used to design the cases of most portable phones, as well as thousands of other industrial products) provides a CLI, whose language, by the way, is different from the Rhino script language. In some computing environments, such as the Smalltalk or Oberon user interfaces, most of the text that appears on the screen can be used to give commands.

Three-dimensional games and simulators for the PC usually include a command-line interface, sometimes as the only means to perform certain tasks. Quake, Unreal Tournament and Battlefield are just some examples. Generally, in these environments, commands start with a "/" (slash).

Example

The command "list files", under various programs:

Program   Command    Type of program
CMD       Dir        Windows Shell
Matlab    Dir        Matrix processing
TACL      FILEINFO   Guardian Shell
Quake     /dir       PC Game

Graphical User Interface

Then the GUI (Graphical User Interface) was made; now people who had no idea about computer science could use a computer. In a GUI everything is more natural, and you do not need to understand the internals of a computer in order to use it.

After that came the touchscreen, which was popularized by Apple.

                           Natural

╔══════════╦═════════════════════════════╦══════════════════════╗
║ Personal ║ Touch / Voice Interface     ║ ■■■■■■■■■■■■■■■■■■■■ ║
║ Common   ║ Graphical (Mouse, Keyboard) ║ ■■■■■■■■■■■■         ║
║ Pro      ║ Command line                ║ ■■■■                 ║
║ Research ║ Punch Cards                 ║ ■                    ║
╚══════════╩═════════════════════════════╩══════════════════════╝

                           Mechanic

The graphical interface emerged as an evolution of the command-line interfaces used in the first operating systems, and it is fundamental to a graphical environment.

Windows desktop environments, X-Window on GNU/Linux, and Aqua on Mac OS X are some of the best-known examples of graphical user interfaces. The graphical user interface has become common because it lets the user interact with the computer in a more comfortable and intuitive way. An interface is the device that allows communication between two systems that do not speak the same language: the set of connections and devices that facilitates communication between two systems, and also the visible side of programs as presented to users to interact with the computer.

It implies the presence of a monitor or screen on which a series of menus and icons represent the options the user can choose within the system.

The characteristics of an efficient interface could be:

– A fixed and permanent representation of a specific context of action.

– Ease of understanding, learning and use.

– The object of interest must be easy to identify.

– Ergonomic design through menus, toolbars and easily accessible icons.

– Interactions based on manual actions on visual or auditory elements and on menu selections with consistent syntax and order.

A GUI is a user interface in which a person interacts with digital information through a graphical simulation environment. This system of interaction is called WYSIWYG (What You See Is What You Get), and in it the objects and icons of the graphical interface behave as metaphors for the actions and tasks the user must perform.

A voice interface is just the next step: a way to interact with the machine through voice commands. You say commands out loud, the machine attempts to interpret them, and based on this information it executes a task. For humans this is indeed more natural; for machines it is a challenge, but that is what SUSI.AI is about. There are other contenders too: Cortana, Siri, Alexa and Google Assistant.

Alexa button:

In the Amazon Alexa ecosystem one can find many game skills, like Jeopardy or CYOA games, and Amazon wants to extend them with these colourful buttons.

Amazon recently released "Alexa Buttons", which you can pair with an Alexa device. A button can change colour and be pushed, enabling "who can push the button the fastest" games and providing another input to Alexa. It is also possible to create experiences by controlling the lights of the buttons, the speed of the transition or gradient into other colours, and so on.

Storyline

Storyline is a service that provides a web user interface where you can create Amazon Alexa skills very intuitively. Similar to Node-RED, you click and drag boxes into configurations in order to define how the skill should respond depending on conditions; for example, you can expect the question "Where is the nearest bar?" and set a directive accordingly.

There are strong limitations compared with the freedom you have when using the API directly, but there are also big advantages: with Storyline you can create mockups in a very short time, which can be very useful in hackathons or for sketches of actual skills. You can also create an Alexa skill from a Google Sheets spreadsheet.

Snips:

Snips is a service that offers you the tools to create a voice assistant completely by yourself, without the need for the cloud. This means no privacy concerns for those who advocate privacy; however, you must train it with data yourself and assume the costs of collecting and organizing that data. This is helpful for companies with private data that want their own voice assistant and do not want to be monitored by Amazon, Google, Apple, Microsoft, or any of the giant tech companies.

Snips is targeted at makers as well: people who want to create their own personal assistant without being limited by most of the restrictions of the other voice assistants.

Sources:

  1. http://ucipedia.uci.cu/index.php/L%C3%ADnea_de_comandos1

Getting results for Multi-word Query and showing limited results in Knowledge Graph

In the Susper knowledge graph, we were unable to get results from the Wikipedia API for multi-word queries, and we also had to decide how much of the information retrieved from the Wikipedia API should be shown in the knowledge graph. For example, for the query donald trump we get no information (https://en.wikipedia.org/w/api.php?format=json&action=query&prop=extracts&exintro=&explaintext=&titles=donald%20trump). On the other hand, searching for a query like india (https://en.wikipedia.org/w/api.php?format=json&action=query&prop=extracts&exintro=&explaintext=&titles=india) returns a lot of information. Earlier we used the Angular slice pipe to display only 600 characters in the knowledge graph (Infobox), but for almost all queries this cut sentences off before they were complete. In this blog, I will describe how we solved both problems in the knowledge graph.

Getting results for a multi-word query from the Wikipedia API:

On searching a lot, we found that results for multi-word queries were present on Wikipedia, but the query words must start with capital letters. For example, we get no results for a query such as donald trump,

i.e. https://en.wikipedia.org/w/api.php?format=json&action=query&prop=extracts&exintro=&explaintext=&titles=donald%20trump

but we do get results when we make the query for Donald Trump,

i.e. https://en.wikipedia.org/w/api.php?format=json&action=query&prop=extracts&exintro=&explaintext=&titles=Donald%20Trump

But since users mostly type search queries in lowercase, we need to convert the query so that each word starts with a capital letter. Since the knowledge graph has been implemented using the ngrx pattern, this logic was easily added in the knowledge effects. We select each word of the query with a regular expression and capitalize it using the JavaScript toUpperCase() and toLowerCase() methods:

toTitleCase(str) {
    return str.replace(
        /\w\S*/g,
        function(txt) {
            return txt.charAt(0).toUpperCase() + txt.substr(1).toLowerCase();
        }
    );
}

this.knowledgeservice.getSearchResults(this.toTitleCase(querypay.query))

 

Like this we now get results for almost all queries.

Limiting information in the Knowledge Graph without cutting off sentences:

To solve this issue, we decided to show only four lines of the data retrieved from the Wikipedia API.

For this we implemented a getPosition function which takes three parameters: a string, a substring, and an index. It returns the position of the index-th occurrence of the substring within the string.

getPosition(string, subString, index) {
    return string.split(subString, index).join(subString).length;
}

this.description = this.description.slice(0, this.getPosition(this.description, '.', 4) + 1);

 

Here, we get the index of the 4th full stop ('.') from the getPosition() function and then pass it to the JavaScript slice method to cut the string at that position.
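A quick worked example of the helper with made-up input:

function getPosition(string, subString, index) {
    return string.split(subString, index).join(subString).length;
}

const text = 'One. Two. Three. Four. Five.';
const cut = getPosition(text, '.', 4); // index of the 4th full stop, i.e. 21
console.log(text.slice(0, cut + 1));   // logs "One. Two. Three. Four."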

Using this, we limited the result to four sentences without cutting any of them off in the middle.

References

1. W3Schools Regular Expressions: https://www.w3schools.com/jsref/jsref_obj_regexp.asp

2. JavaScript slice method: https://www.w3schools.com/jsref/jsref_slice_array.asp

3. Wikipedia API: https://www.mediawiki.org/wiki/API:Main_page


Adding Online Payment Support in Open Event Frontend via PayPal

Open Event Frontend includes a ticketing system that supports both paid and free tickets. To buy a paid ticket, Open Event provides several options such as debit card, credit card, cheque, bank transfer and on-site payments. To support debit and credit card payments, Open Event uses PayPal Checkout as one of the options. Using the PayPal checkout screen, users can enter their card details and pay for their ticket, or they can use their PayPal wallet money to pay.

Given below are the steps to be followed for successfully charging a user for a ticket using his/her card.

  • We create an application on the PayPal developer dashboard to receive a client ID and secret key.
  • We set these keys in the admin dashboard of Open Event, and at checkout we use them to render the checkout screen.
  • After clicking the checkout button, a request is sent to the create-paypal-payment endpoint of the Open Event server to create a PayPal token, which is used in the checkout procedure.
  • After the user's verification, PayPal generates a payment ID which is used by Open Event Frontend to charge the user for the stipulated amount.
  • We send this token to the Open Event server, which processes it and charges the user.
  • We get an error or success message from the Open Event server as per the outcome of the process.

To render the PayPal checkout elements we use the paypal-checkout library from npm. The PayPal button is rendered using the Button.render method of the library. The code snippet is given below.

// app/components/paypal-button.js

paypal.Button.render({
   env: 'sandbox',

   commit: true,

   style: {
     label : 'pay',
     size  : 'medium', // tiny, small, medium
     color : 'gold', // orange, blue, silver
     shape : 'pill'    // pill, rect
   },

   payment() {
   // this is used to obtain paypal token to initialize payment process 
   },

   onAuthorize(data) {
     // this callback will be for authorizing the payments
    }

 }, this.elementId);

 

After the button is rendered, the next step is to obtain a payment token from the create-paypal-payment endpoint of the Open Event server. For this we use the payment() callback of paypal-checkout. The code snippet for the payment callback method is given below:

// app/components/paypal-button.js

let createPayload = {
      'data': {
        'attributes': {
          'return-url' : `${window.location.origin}/orders/${order.identifier}/placed`,
          'cancel-url' : `${window.location.origin}/orders/${order.identifier}/placed`
        },
        'type': 'paypal-payment'
      }
    };
paypal.Button.render({
     //Button attributes

      payment() {
        return loader.post(`orders/${order.identifier}/create-paypal-payment`, createPayload)
          .then(res => {
            return res.payment_id;
          });
      },

      onAuthorize(data) {
        // this callback will be for authorizing the payments
      }

    }, this.elementId);

 

After getting the token, the payment screen is initialized and the user is asked to enter his/her credentials. This process is handled by PayPal's servers. After the user confirms the payment, PayPal generates a paymentID and a payerID and sends them back to Open Event. After the payment authorization, the onAuthorize() method is called and the payment is further processed in this callback: the payment ID and payer ID received from PayPal are sent to the charge endpoint of the Open Event server to charge the user. Depending on the success or failure message received from PayPal, a proper message is displayed to the user and the order is confirmed or cancelled respectively. The code snippet for onAuthorize is given below:

// app/components/paypal-button.js

onAuthorize(data) {
        // this callback will be for authorizing the payments
        let chargePayload = {
          'data': {
            'attributes': {
              'stripe'            : null,
              'paypal_payer_id'   : data.payerID,
              'paypal_payment_id' : data.paymentID
            },
            'type': 'charge'
          }
        };
        let config = {
          skipDataTransform: true
        };
        chargePayload = JSON.stringify(chargePayload);
        return loader.post(`orders/${order.identifier}/charge`, chargePayload, config)
          .then(charge => {
            if (charge.data.attributes.status) {
              notify.success(charge.data.attributes.message);
              router.transitionTo('orders.view', order.identifier);
            } else {
              notify.error(charge.data.attributes.message);
            }
          });
      }

 

Full code can be seen here.

In this way we add PayPal payment support to Open Event Frontend. Please follow the links below for further clarification and a detailed overview.

Resources:


Displaying All Providers and All Authors separately in Susper Angular App

In Susper, results are retrieved from a lot of providers and authors. Displaying the full list of authors and providers on the result page makes the result section look odd and more complex. In this blog, I will describe how we separated the list of all providers and authors from the main result page and displayed it in a lightbox.

Steps:

  • Making a black overlay for the lightbox:

First of all, to display the results in a lightbox, we need a black overlay. For this we need a div element with a closing button inside it.

Clicking on the black overlay should toggle the status of the lightbox, therefore we have bound the click event to the showMore() method.

<div id="fade_analytics" class="black_overlay" (click)='showMore()'><a class="closeX" id="closeX">&#10006;</a></div>
  • Creating the white content to display all providers or authors:

Now we need a white block to display all providers or all authors, i.e. a white foreground over the black overlay in the background.

Therefore I implemented a div element and put all the chart elements inside it, so that they are displayed on a white foreground over a translucent black background.

<div id="light_analytics" class="white_content_analytics">
<div class="show_more"><h2 class="heading"><b>Analytics</b></h2><div id="filtersearch" *ngFor="let nav of navigation$|async; let i = index"><div class="card-container" id="relate" *ngIf="nav.displayname =='Provider' || nav.displayname =='Authors'"><h3  class="related-searches" *ngIf="i===currentelement"><b>Top {{nav.displayname}}<span *ngIf="nav.displayname.slice(-1)!=='s'">s</span></b></h3>
<div style="display: block;cursor:pointer;cursor: hand;" *ngFor="let element of  ((i != currentelement)? (nav.elements | slice:0:0) :nav.elements)">
</div></div></div></div>

The code to display all providers and all authors is contained inside this white block, and the logic to display each section is also implemented there.

  • Showing only 50 results on the result page and implementing a button to open all providers or authors in the lightbox:

Currently we display the top five providers and authors, along with a button to show more, which then displays up to 50 results, followed by a button that toggles the display of all authors and providers in the lightbox. To display 5 and 50 results we use the slice pipe in Angular. The Show All Providers/Authors button is bound to the showMore() function, which toggles the lightbox.

<div style="display: block;cursor:pointer;cursor: hand;" *ngFor="let element of ((i != selectedelement)? (nav.elements | slice:0:5) :(nav.elements | slice:0:50) )">
<a [routerLink]="['/search']" [queryParams]="changequerylook(element.modifier,element)" *ngIf="element.name" href="#" [style.color]="themeService.linkColor">
{{element.name}} <span class="badge badge-info">{{element.count}}</span>
</a><a  *ngIf="!element.name" style="color: black;cursor: default" href="#" [style.color]="themeService.linkColor">
Undefined Author <span class="badge badge-info">{{element.count}}</span></a></div>
<button class="showMoreBtn" (click)="showMore()" *ngIf="selectedelement===i" [style.color]="themeService.linkColor">Show All {{nav.displayname}}<span *ngIf="nav.displayname.slice(-1)!=='s'">s</span></button>

 

  • Implementing the logic to display all results in the white content:

Now we need to implement the logic that toggles the lightbox displaying all providers and all authors. This logic lives mainly in the showMore() function:

showMore() {
  this.selectedelement = -1;
  if (!this.showMoreAnalytics) {
    this.showMoreAnalytics = true;
    document.getElementById('light_analytics').style.display = 'block';
    document.getElementById('fade_analytics').style.display = 'block';
  } else {
    this.showMoreAnalytics = false;
    document.getElementById('light_analytics').style.display = 'none';
    document.getElementById('fade_analytics').style.display = 'none';
  }
}

Here we use the showMoreAnalytics variable to check whether the user has asked to see all providers or authors. We also set the selectedelement variable to -1 to hide the long list of 50 providers or authors while the lightbox displays them all. Finally, we use the JavaScript document object to select the DOM elements by their id and set their display property; this shows and hides the lightbox on the result page.

Resources

1. YaCy advanced search parameters:

http://www.yacy-websuche.de/wiki/index.php/En:SearchParameters

2. W3Schools lightbox:

https://www.w3schools.com/howto/howto_js_lightbox.asp

 


Query check using RegExp

Specific results can be obtained in loklak using the different type combinations associated with a query (e.g. from:user, @user, #query). To identify what the user is asking for and to maintain the flow of API requests to the loklak server, RegExp is used. In this blog post, I'll discuss its use in the project.

Using RegExp in Services

Patterns for all possible query types have already been defined in the reg-exp TypeScript file inside the utils/ folder. We just need to import these expressions to check the type of a query anywhere in the codebase.
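As an illustration, the exported patterns could look roughly like this (a sketch with simple capture groups; the actual expressions in utils/reg-exp.ts may differ):

// utils/reg-exp.ts (sketch)
export const hashtagRegExp = /#(\w+)/;  // matches '#query', captures 'query'
export const fromRegExp = /from:(\w+)/; // matches 'from:user', captures 'user'
export const mentionRegExp = /@(\w+)/;  // matches '@user', captures 'user'
export const nearRegExp = /near:(\w+)/; // matches 'near:place', captures 'place'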

The query (q) argument of the search endpoints of api.loklak depends on the type associated with the query, i.e. the argument for from:user is different from that for @user. To keep a check on this, I have used RegExp:

if (hashtagRegExp.exec(query) !== null) {
    // Check for hashtag query
    jsonpUrl += '&q=%23' + hashtagRegExp.exec(query)[1]
        + ' ' + hashtagRegExp.exec(query)[0];
} else if (fromRegExp.exec(query) !== null) {
    // Check for from user query
    jsonpUrl += '&q=from%3A' + fromRegExp.exec(query)[1];
} else if (mentionRegExp.exec(query) !== null) {
    // Check for mention query
    jsonpUrl += '&q=%40' + mentionRegExp.exec(query)[1];
} else if (nearRegExp.exec(query) !== null) {
    // Check for near query
    jsonpUrl += '&q=near%3A' + nearRegExp.exec(query)[1];
} else {
    // for other queries
    jsonpUrl += '&q=' + query;
}

 

A similar architecture has been used in UserService and SuggestService.

Using RegExp in info-box

A new feature has been added to loklak to provide RSS and JSON feeds through api.loklak.org. It requires a query check, whose result is then passed through an anchor-tag link to provide the RSS and JSON feed based on the query and the type associated with it. It is included in the info-box, and the query check is done in ngOnChanges(), which means it keeps itself updated as the query changes.

    this.store.select(fromRoot.getQuery)
        .subscribe(query => this.stringQuery = query.displayString);
    this.parseApiResponseData();
    if (hashtagRegExp.exec(this.stringQuery) !== null) {
        // Check for hashtag query
        this.queryString = '%23' + hashtagRegExp.exec(this.stringQuery)[1]
            + ' ' + hashtagRegExp.exec(this.stringQuery)[0];
    } else if (fromRegExp.exec(this.stringQuery) !== null) {
        // Check for from user query
        this.queryString = 'from%3A' + fromRegExp.exec(this.stringQuery)[1];
    } else if (mentionRegExp.exec(this.stringQuery) !== null) {
        // Check for mention query
        this.queryString = '%40' + mentionRegExp.exec(this.stringQuery)[1];
    } else {
        // for other queries
        this.queryString = this.stringQuery;
    }
}

 

A similar architecture is followed throughout the project in the different features implemented so far. It would not be easy to cover each of those implementations in one blog post; for the actual implementations, please go through the codebase.

Resources


Adding New tests for Knowledge Service

Testing exercises our application by creating instances of the relevant classes, executing their functions, and checking the app's actual behaviour against the expected result. The tools and frameworks used for this in Angular are Jasmine and Karma. In this blog, I will describe how I implemented tests for the newly added Knowledge API service, which helped us increase the overall code coverage of Susper by 1.05%.

Adding tests for Knowledge API:

We need to check whether the API is functioning, and this can be done by using a mocked (hard-coded) response for a query and then comparing it with the response received from the API. This verifies the proper functioning of our API.

This is a common practice in Angular, and to achieve it we use some testing dependencies provided by Angular, such as MockBackend, MockConnection and BaseRequestOptions.

import { MockBackend, MockConnection } from '@angular/http/testing';
import { Http, Jsonp, BaseRequestOptions, RequestMethod, Response, ResponseOptions, HttpModule, JsonpModule } from '@angular/http';

 

We also need to define a mock response; here I have used a hard-coded response for the query India:

export const MockKnowledgeApi = {
    results: [
        {"batchcomplete": "", "query":
            {"normalized":
                [{"from": "india", "to": "India"}], "pages":
                {"14533": {"pageid": 14533, "ns": 0, "title": "India", "extract":
                    `India (IAST: Bh\u0101rat), also called the Republic of India (IAST: Bh\u0101rat Ga\u1e47ar\u0101jya), is a country in South Asia.`
                }}}}
    ],
    MaxHits: 5
};

 

Now we define a mock provider, mockHttp_provider, for Http, built from injected instances of MockBackend and BaseRequestOptions.

const mockHttp_provider = {
  provide: Http,
  deps: [MockBackend, BaseRequestOptions],
  useFactory: (backend: MockBackend, options: BaseRequestOptions) => {
    return new Http(backend, options);
  }
};

 

Now we need to add all the services and dependencies that we will be using to the providers array, and we inject the instances of KnowledgeapiService and MockBackend in the beforeEach function:

beforeEach(inject([KnowledgeapiService, MockBackend],
  (knowledgeService: KnowledgeapiService, mockBackend: MockBackend) => {
    service = knowledgeService;
    backend = mockBackend;
}));
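The TestBed configuration that registers these providers is not shown above; it could look roughly like this (a sketch; the exact imports in the spec file may differ):

beforeEach(() => {
  TestBed.configureTestingModule({
    imports: [HttpModule, JsonpModule],
    providers: [
      KnowledgeapiService,
      MockBackend,
      BaseRequestOptions,
      mockHttp_provider
    ]
  });
});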

 

Now we use the same query for which we created the mocked response, and we check the response from our API:

const searchquery = 'india';

backend.connections.subscribe((connection: MockConnection) => {
  // Respond to the outgoing request with the mocked payload
  const options = new ResponseOptions({ body: JSON.stringify(MockKnowledgeApi) });
  connection.mockRespond(new Response(options));
  expect(connection.request.method).toEqual(RequestMethod.Get);
  expect(connection.request.url).toBe(
    `https://en.wikipedia.org/w/api.php?&origin=*&format=json&action=query&prop=extracts&exintro=&explaintext=&` +
    `titles=${searchquery}`);
});

service.getSearchResults(searchquery).subscribe((res) => {
  expect(res).toEqual(MockKnowledgeApi);
});

 

This checks the working of our API: if it behaves as expected, the test case passes.

In this way we implemented the tests for the KnowledgeapiService, which helped us test our API and increase overall code coverage significantly.

Resources

  1. Testing in Angular: https://angular.io/guide/testing
  2. Testing with MockBackend: https://angular.io/api/http/testing/MockBackend
  3. Using MockBackend to simulate a response: https://codecraft.tv/courses/angular/unit-testing/http-and-jsonp/#_using_the_code_mockbackend_code_to_simulate_a_response