Demystifying a travis.yml file

In this blog post, we are going to demystify a .travis.yml file. For reference, we will be using the Travis configuration from the SUSI Skill CMS project. But before delving into it, I would like to give a brief introduction to Continuous Integration (CI) and Travis CI.

What are CI and Travis CI?

Continuous Integration is the practice of integrating a piece of work as early as possible, rather than later, so that one can receive immediate and frequent feedback on anything that is wrong.

“Continuous Integration is the practice of merging all developer working copies to a shared mainline several times a day.”

– Wikipedia

Travis CI is a hosted continuous integration platform that is free for all open source projects hosted on GitHub. With just a file called .travis.yml containing some information about our project, we can trigger automated builds on every change to our code base, whether in the master branch, other branches or even a pull request.

sudo: required
dist: trusty
language: node_js

node_js:
  - 6

script:
  - npm test

after_success:
  - bash ./pr_deploy.sh
  - bash ./deploy.sh

cache:
  directories:
    - node_modules

branches:
  only:
    - master

Travis configuration file of susi_skill_cms

Part-by-part explanation of the file

  • When specifying sudo: required, Travis CI runs each build in an isolated Google Compute Engine virtual machine that offers a vanilla build environment for every build. This has the advantage that no state persists between builds, offering a clean slate and making sure that the tests run in an environment built from scratch. Builds have access to a variety of services for data storage and messaging, and can install anything that’s required for them to run.
  • The keyword dist specifies the virtualization environment being used. trusty here refers to Ubuntu 14.04 (Trusty Tahr), the Linux distribution used for the build environment.
  • The keyword language specifies the programming language used for the project; here it is Node.js (declared as node_js).
  • Then follows the node_js block, which specifies the Node.js version to be used; here it is version 6.
  • The script block is where Travis runs the build and test commands. Unless otherwise specified, it runs the default script for the configured language; for Node.js that default is npm test. What npm test actually does is defined in the package.json file (see the excerpt after this list): it executes the npm run lint && react-scripts test --env=jsdom command, which checks for linting issues and runs the various unit and snapshot tests. We can add multiple commands to be executed here.
  • The after_success block runs after the entire script has finished, i.e. after the last step of the normal build process has executed successfully. It runs two commands:
    • bash ./pr_deploy.sh : responsible for making a Surge deployment for each pull request
    • bash ./deploy.sh : responsible for making a deployment of the master branch
  • Travis CI can cache content that does not change often, to speed up the build process. The cache setting here caches the node_modules directory, so that dependencies do not have to be reinstalled on every build.
  • We can restrict which branches trigger builds, either with a whitelist (using the only keyword) or a blacklist (using the except keyword; see the example after this list). Here, the configuration runs builds only for the master branch and for pull requests made against it.
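
For reference, npm test simply runs whatever is defined under the test entry of the scripts section in package.json. Since the command is quoted above, the relevant excerpt looks roughly like the following (all other entries of the real susi_skill_cms package.json are omitted here):

{
  "scripts": {
    "test": "npm run lint && react-scripts test --env=jsdom"
  }
}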
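
As a purely hypothetical illustration of the blacklist variant (this is not part of the susi_skill_cms configuration shown above), an except block would look like this, with made-up branch names:

# hypothetical example: build every branch except the ones listed below
branches:
  except:
    - legacy
    - experimental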


Adding Servlet for SUSI Service

Whenever we ask SUSI anything, we get an intelligent reply. The API endpoint which clients use to fetch the reply for users is http://api.susi.ai/susi/chat.json, and this endpoint is defined in SusiService.java. In this blog post I will explain how a query is processed by the SUSI Server and the output is sent back to clients.

public class SusiService extends AbstractAPIHandler implements APIHandler

This is a public class and, just like every other servlet in the SUSI Server, it extends the AbstractAPIHandler class, which provides us with many features, including AAA (authentication, authorization, accounting) related ones.

    public String getAPIPath() {
        return "/susi/chat.json";
    }

The function above defines the endpoint path of the servlet, in this case “/susi/chat.json”. The final API endpoint will be http://api.susi.ai/susi/chat.json.

This endpoint accepts six parameters in a GET request (see the snippet below): “q” is the query parameter, “timezoneOffset” and “language” allow SUSI to reply according to the user’s local time and language, “latitude” and “longitude” are used to get the user’s location, and a “count” parameter is read as well.

        String q = post.get("q", "").trim();
        int count = post.get("count", 1);
        int timezoneOffset = post.get("timezoneOffset", 0); // minutes, i.e. -60
        double latitude = post.get("latitude", Double.NaN); // i.e. 8.68
        double longitude = post.get("longitude", Double.NaN); // i.e. 50.11
        String language = post.get("language", "en"); // ISO 639-1
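
For illustration, a hypothetical request using these parameters (with a made-up query and the example values from the comments above) could look like:

http://api.susi.ai/susi/chat.json?q=hello&timezoneOffset=-60&latitude=8.68&longitude=50.11&language=en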

After getting all the parameters, we update the skill data in the database. This is done using the DAO.susi.observe() function.
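
As a minimal sketch of that step (the error handling present in the actual SusiService.java is omitted here), the call is simply:

// refresh SUSI's skill data before answering the query, as described above
DAO.susi.observe();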

Then SUSI checks whether the reply should come from an Etherpad dream, i.e. whether we are currently dreaming something.

if (etherpad_dream != null && etherpad_dream.length() != 0) {
    String padurl = etherpadUrlstub + "/api/1/getText?apikey=" + etherpadApikey + "&padID=$query$";

If SUSI is dreaming something, then we call the Etherpad API with the API key and the pad ID.

After we get the response from the Etherpad API, we parse the JSON and store the skill text in a local variable.

JSONObject json = new JSONObject(serviceResponse);
String text = json.getJSONObject("data").getString("text");

After that, we fill an empty SUSI mind with the dream. susi_memory_dir is a folder named “susi” inside the data folder, present on the server itself.

SusiMind dream = new SusiMind(DAO.susi_memory_dir); 

We need the memory directory here to get a share of the memory of previous dialogues; otherwise, we cannot test call-back questions.

JSONObject rules = SusiSkill.readEzDSkill(new BufferedReader(new InputStreamReader(new ByteArrayInputStream(text.getBytes(StandardCharsets.UTF_8)), StandardCharsets.UTF_8)));
dream.learn(rules, new File("file://" + etherpad_dream));

When we call the dream.learn function, SUSI starts dreaming. Now we will try to find an answer out of the dream.

The SusiCognition takes all the parameters, including the dream, the query and the user’s identity, and then tries to find the answer to that query based on the skill it just learned.

    SusiCognition cognition = new SusiCognition(dream, q, timezoneOffset, latitude, longitude, language, count, user.getIdentity());

If SUSI finds an answer, then it replies with that answer; otherwise it tries to answer with built-in intents. These intents are both existing SUSI skills and internal console services.

if (cognition.getAnswers().size() > 0) {
    DAO.susi.getMemories().addCognition(user.getIdentity().getClient(), cognition);
    return new ServiceResponse(cognition.getJSON());
}

This is how any query is processed by SUSI Server and the output is sent to clients.

Currently, people use Etherpad (http://dream.susi.ai/) to make their own skills, but we are also working on the SUSI CMS, where skills can be edited and tested in the same web application.

References

Java Servlet Docs: http://docs.oracle.com/javaee/6/tutorial/doc/bnafd.html

Embedding Jetty: http://www.eclipse.org/jetty/documentation/9.4.x/embedding-jetty.html

Server-Side Java: http://www.javaworld.com/article/2077634/java-web-development/how-to-get-started-with-server-side-java.html
