Testing Deploy Functions Using Sinon.JS in Yaydoc

In Yaydoc, we deploy the generated documentation to GitHub Pages as well as Heroku. This is one of the most important functions in the source code, and I don't want an unnoticed change to break the build in the future, so I decided to write a test case for the deploy function. But the deploy function had a lot of dependencies: child processes, sockets, etc. It is also not a pure function, so there is no return value to assert. I therefore decided to stub the child process to check whether the correct script was passed. To write the stub I chose the Sinon.JS framework, because it can be used for writing stubs, mocks and spies, and one of its advantages is that it works with any testing framework.

```javascript
sinon.stub(require("child_process"), "spawn").callsFake(function (fileName, args) {
  if (fileName !== "./ghpages_deploy.sh") {
    throw new Error(`invalid ${fileName} invoked`);
  }

  let ghArgs = ["-e", "-i", "-n", "-o", "-r"];
  ghArgs.forEach(function (x) {
    if (args.indexOf(x) < 0) {
      throw new Error(`${x} argument is not passed`);
    }
  });

  let process = {
    on: function (listenerId, callback) {
      if (listenerId !== "exit") {
        throw new Error("listener id is not exit");
      }
    }
  };
  return process;
});
```

In Sinon you create a stub by passing the object as the first parameter and the method name as the second parameter to Sinon's `stub` method. It returns an object on which you call `callsFake` with the function you would like to substitute. In the code above, I wrote a simple stub which overwrites Node.js child_process's `spawn` method, so I passed the `child_process` module as the first parameter and the method name `"spawn"` as the second. We must check that the correct deploy script and the correct parameters are passed, so I wrote a function which checks those conditions and passed it to `callsFake`.
```javascript
describe('deploy script', function() {
  it("gh-pages deploy", function() {
    deploy.deployPages(fakeSocket, {
      gitURL: "https://github.com/sch00lb0y/yaydoc.git",
      encryptedToken: crypter.encrypt("dummykey"),
      email: "admin@fossasia.org",
      uniqueId: "ajshdahsdh",
      username: "fossasia"
    });
  });
});
```

Finally, test the deploy function by calling it. I use Mocha as the testing framework; I have already written a blog post on Mocha, so if you're interested please check it out.

Resources:
- Best Practices for Spies, Stubs and Mocks in SinonJS
- Unit Test like a Secret Agent with SinonJS
- AJAX and SinonJS


Continuous Integration in Yaydoc using GitHub webhook API

In Yaydoc, Travis is used for pushing the documentation for each and every commit. But this makes us rely on a third party to push the documentation, and in the long run it won't allow us to implement new features, so we decided to do the continuous documentation pushing on our own. In order to build the documentation for each and every commit, we have to know when the user pushes code. This can be achieved by using the GitHub webhook API. Basically, we have to register our API with a specific GitHub repository, and then GitHub will send a POST request to our API on each and every commit. The "auth/ci" handler is used to get access on behalf of the user. Here we request the user to give Yaydoc access to their public repositories, to read organization details, and to write a webhook to the repository. I also maintain state by setting the ci session flag to true, so that I can tell whether the callback is for a gh-pages deploy or a CI deploy. On callback, I keep the necessary information (username, access_token, id and email) in the session. Then, based on the ci session state, I redirect to the appropriate handler.
In this case I'm redirecting to "ci/register". After redirecting to "ci/register", I get all the public repositories using the GitHub API and then ask the user to choose the repository on which they want to integrate Yaydoc CI.

```javascript
router.post('/register', function (req, res, next) {
  request({
    url: `https://api.github.com/repos/${req.session.username}/${repositoryName}/hooks?access_token=${req.session.token}`,
    method: 'POST',
    json: {
      name: "web",
      active: true,
      events: ["push"],
      config: {
        url: process.env.HOSTNAME + '/ci/webhook',
        content_type: "json"
      }
    }
  }, function (error, response, body) {
    repositoryModel.newRepository(req.body.repository,
      req.session.username,
      req.session.githubId,
      crypter.encrypt(req.session.token),
      req.session.email)
      .then(function (result) {
        res.render("index", {
          showMessage: true,
          messages: `Thanks for registering with Yaydoc. Hereafter documentation will be pushed to GitHub Pages on each commit.`
        })
      })
  })
})
```

After the user chooses the repository, a POST request is sent to "ci/register"; I then register the webhook with the repository and save the repository and user details in the database, so they can be used when GitHub asks us to push the documentation to GitHub Pages.

```javascript
router.post('/webhook', function (req, res, next) {
  var event = req.get('X-GitHub-Event')
  if (event === 'push') {
    repositoryModel.findOneRepository({
      githubId: req.body.repository.owner.id,
      name: req.body.repository.name
    }).then(function (result) {
      var data = {
        email: result.email,
        gitUrl: req.body.repository.clone_url,
        docTheme: "",
      }
      generator.executeScript({}, data, function (err, generatedData) {
        deploy.deployPages({}, {
          email: result.email,
          gitURL: req.body.repository.clone_url,
          username: result.username,
          uniqueId: generatedData.uniqueId,
          encryptedToken: result.accessToken
        })
      })
    })
    res.json({ status: true })
  }
})
```

After you register the webhook, GitHub will send a request to the URL we registered on the repository; in our case "https://yaydoc.herokuapp.com/ci/webhook" is that URL. The type of the event can be read from the 'X-GitHub-Event' header. Right now I'm registering only for the push event. So…
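The event-dispatch check in the webhook handler is small enough to sketch in isolation (a hypothetical helper, not Yaydoc code). One detail worth noting: GitHub sends the event name in lowercase, e.g. "push", so comparing against a capitalized "Push" would never match.

```javascript
// Hypothetical helper (not Yaydoc code): decide whether a webhook
// delivery should trigger a documentation build. GitHub puts the event
// name in the X-GitHub-Event header; Node lowercases header names.
function shouldBuild(headers) {
  return headers['x-github-event'] === 'push';
}

console.log(shouldBuild({ 'x-github-event': 'push' })); // true
console.log(shouldBuild({ 'x-github-event': 'ping' })); // false
```

GitHub also delivers a "ping" event right after a hook is registered, which is why filtering on the event name matters even for a single-event hook.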


Generating responsive email using mjml in Yaydoc

In Yaydoc, an email with a download, preview and deploy link is sent to the user after the documentation is generated. Initially, Yaydoc was sending this email in plain text without any styling, so I decided to make an attractive HTML email template for it. The problem with HTML email is adding custom CSS and making it responsive, because emails will be viewed on various devices: mobiles, tablets and desktops. While going through the GitHub trending list, I came across mjml and was totally stunned by its capabilities. Mjml is a responsive email generation framework built using React (the popular front-end framework maintained by Facebook). Install mjml on your system using npm:

```
npm init -y && npm install mjml
```

Then add mjml to your path:

```
export PATH="$PATH:./node_modules/.bin"
```

Mjml has a lot of React components pre-built for creating responsive email, for example mj-text, mj-image, mj-section, etc. Here I'm sharing the snippet used for generating the email in Yaydoc.

```html
<mjml>
  <mj-head>
    <mj-attributes>
      <mj-all padding="0" />
      <mj-class name="preheader" color="#CB202D" font-size="11px"
                font-family="Ubuntu, Helvetica, Arial, sans-serif" padding="0" />
    </mj-attributes>
    <mj-style inline="inline">
      a { text-decoration: none; color: inherit; }
    </mj-style>
  </mj-head>
  <mj-body>
    <mj-container background-color="#ffffff">
      <mj-section background-color="#CB202D" padding="10px 0">
        <mj-column>
          <mj-text align="center" color="#ffffff" font-size="20px"
                   font-family="Lato, Helvetica, Arial, sans-serif" padding="18px 0px">
            Hey! Your documentation generated successfully
            <i class="fa fa-address-book-o" aria-hidden="true"></i>
          </mj-text>
        </mj-column>
      </mj-section>
      <mj-section background-color="#ffffff" padding="20px 0">
        <mj-column>
          <mj-image src="https://res.cloudinary.com/template-gdg/image/upload/v1498552339/play_cuqe89.png"
                    width="85px" padding="0 25px" />
          <mj-text align="center" color="#EC652D" font-size="20px"
                   font-family="Lato, Helvetica, Arial, sans-serif"
                   vertical-align="top" padding="20px 25px">
            <strong><a>Preview it</a></strong><br />
          </mj-text>
        </mj-column>
        <mj-column>
          <mj-image src="https://res.cloudinary.com/template-gdg/image/upload/v1498552331/download_ktlqee.png"
                    width="100px" padding="0 25px" />
          <mj-text align="center" color="#EC652D" font-size="20px"
                   font-family="Lato, Helvetica, Arial, sans-serif"
                   vertical-align="top" padding="20px 25px">
            <strong><a>Download it</a></strong><br />
          </mj-text>
        </mj-column>
        <mj-column>
          <mj-image src="https://res.cloudinary.com/template-gdg/image/upload/v1498552325/deploy_yy3oqw.png"
                    width="100px" padding="0px 25px" />
          <mj-text align="center" color="#EC652D" font-size="20px"
                   font-family="Lato, Helvetica, Arial, sans-serif"
                   vertical-align="top" padding="20px 25px">
            <strong><a>Deploy it</a></strong><br />
          </mj-text>
        </mj-column>
      </mj-section>
      <mj-section background-color="#333333" padding="10px">
        <mj-column>
          <mj-text align="center" color="#ffffff" font-size="20px"
                   font-family="Lato, Helvetica, Arial, sans-serif" padding="18px 0px">
            Thanks for using Yaydoc
            <i class="fa fa-address-book-o" aria-hidden="true"></i>
          </mj-text>
        </mj-column>
      </mj-section>
    </mj-container>
  </mj-body>
</mjml>
```

The main goal of this example is to make a responsive email which looks like the image given below. In the mj-head tag, I have imported all the necessary fonts using the mj-class tag and written my custom CSS in mj-style.
Then I made a container with one row and one column using the mj-container, mj-section and mj-column tags and changed the container background color to #CB202D with the background-color attribute. In that container I wrote a heading which says `Hey! Your documentation generated successfully` using the mj-text tag; that gives the red top bar with the success message. Moving on to the second part, I made a container with three columns and added one image to each column using the mj-image tag (specifying the image URL in the src attribute), with the corresponding text below each image using the mj-text tag. At last, I made one more container like the first one, with a different message saying `Thanks for using Yaydoc` and background color #333333. Finally, transpile your mjml code to HTML by executing the following…


Testing child process using Mocha in Yaydoc

Mocha is a JavaScript testing framework. It can be used in both Node.js and the browser, and it is one of the most popular testing frameworks available. Mocha is widely used for Behavior Driven Development (BDD). In Yaydoc, we use Mocha to test our web UI. One of the main tasks in Yaydoc is documentation generation, for which we built a bash script. We run the bash script using Node's child_process module, but in order to run the test you have to execute the child process before test execution. This can be achieved with Mocha's before hook. Install mocha on your system:

```
npm install -g mocha
```

Here is the test case which I wrote in the Yaydoc test file.

```javascript
const assert = require('assert')
const spawn = require('child_process').spawn
const uuidV4 = require("uuid/v4")

describe('WebUi Generator', () => {
  let uniqueId = uuidV4()
  let email = 'fossasia@gmail.com'
  let args = [
    "-g", "https://github.com/fossasia/yaydoc.git",
    "-t", "alabaster",
    "-m", email,
    "-u", uniqueId,
    "-w", "true"
  ]
  let exitCode

  before((done) => {
    let process = spawn('./generate.sh', args)
    process.on('exit', (code) => {
      exitCode = code
      done()
    })
  })

  it('exit code should be zero', () => {
    assert.equal(exitCode, 0)
  })
})
```

The describe() function is used to describe our test case; in our scenario we're testing the generate script, so we name it "WebUi Generator". As mentioned above, we have to run the child process in the before hook. The it() function is where we write the test case; if it fails, an error is thrown. We use Node's assert module to do the assertion, as you can see in the first it() block, which checks whether the exit code is zero.

```
mocha test.js --timeout 1500000
```

Since documentation generation takes time, we have to pass a timeout when running Mocha. If your test case passes successfully, you will get output similar to this.
WebUi Generator
  ✓ exit code should be zero

Resources:
- Getting started with Mocha and Node.js
- How to test JS with Mocha
- Unit Test with Mocha


Scraping in JavaScript using Cheerio in Loklak

FOSSASIA recently started a new project, loklak_scraper_js. The objective of the project is to develop a single web-scraping library that can be used easily on most platforms, since maintaining the same scraping logic across different programming languages and projects is a headache and a waste of time. An obvious solution was to write the scrapers in JavaScript: JS is lightweight, fast, and its functions and classes can be easily used from many programming languages, e.g. via Nashorn in Java. Cheerio is a library that is used to parse HTML. Let's look at the YouTube scraper.

Parsing HTML

Steps involved in web-scraping:
1. The HTML source of the webpage is obtained.
2. The HTML source is parsed.
3. The parsed HTML is traversed to extract the required data.

For the 2nd and 3rd steps we use cheerio. Obtaining the HTML source of a webpage is a piece of cake; it is done by the getHtml function, which uses the sync-request library to send the "GET" request. Parsing the HTML is done with the load method, passing the obtained HTML source, as in the getSearchMatchVideos function.

```javascript
var $ = cheerio.load(htmlSourceOfWebpage);
```

Since the API of cheerio is similar to that of jQuery, by convention the variable referencing the cheerio object holding the parsed HTML is named "$". Sometimes the requirement is to extract data from a particular HTML tag (one containing a large number of nested child tags) rather than from the whole parsed HTML. In that case the load method can be used again, as in the getVideoDetails function, to obtain only the head tag.

```javascript
var head = cheerio.load($("head").html());
```

The html method provides the HTML content of the selected tag, i.e. the <head> tag. If a parameter is passed to the html method, the content of the selected tag (here <head>) is replaced by the HTML of that parameter.

Extracting data from parsed HTML

Some of the contents that we see in the webpage are dynamic; they are not static HTML.
When a "GET" request is sent, the static HTML of the webpage is obtained. When you inspect an element in the browser, you can see that the class attribute has a different value in the live webpage than in the static HTML obtained from the "GET" request using the getHtml function. For example, inspecting the link of one of the suggested videos shows different values of the class attribute:

(Screenshots: the attribute in the website, and in the static HTML obtained from the "GET" request using the getHtml function.)

So it is recommended to first check whether attributes have the same values, and then proceed accordingly. Now, let's dive into the actual scraping. Most of the required data is available in meta tags inside the head tag; the extractMetaAttribute function extracts the value of the content attribute based on another provided attribute and its value.

```javascript
function extractMetaAttribute(cheerioObject, metaAttribute, metaAttributeValue) {
  var selector = 'meta[' + metaAttribute + '="' + metaAttributeValue + '"]';
  return cheerioObject(selector).attr("content");
}
```

"cheerioObject" here will be the "head"…
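The selector-building step of extractMetaAttribute is plain string concatenation and can be exercised on its own (a stand-alone sketch with a hypothetical helper name, not the library code):

```javascript
// Build a jQuery/cheerio-style attribute selector for a <meta> tag.
function buildMetaSelector(metaAttribute, metaAttributeValue) {
  return 'meta[' + metaAttribute + '="' + metaAttributeValue + '"]';
}

console.log(buildMetaSelector('property', 'og:title'));
// meta[property="og:title"]
```

Cheerio resolves such attribute selectors exactly as jQuery would, so the same selector string works in both environments.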


The Mission Mars Challenge with NodeJS and Open Source Bot Framework Emulator

Commissioned under a top-secret space project, our first human team set foot on Mars months ago. This mission on the red planet began with the quest to establish civilization by creating our first outpost on an extraterrestrial body. Not so long ago, mission control lost contact with the crew, and we are gathering the best of mankind to help save this mission. In this rescue mission, you will learn to create a bot using an open source framework and tools. You will be given access to our code repositories and other technical resources. There are 3 missions and 2 code challenges to solve in order to bring the Mars mission back on track. We need you! Be the first to crack the problems and rescue the compromised mission! Your bounty awaits! Receive your mission briefing at the control centre after checking in at the FOSSASIA Summit!

How to enter: Join us on March 18 at the FOSSASIA Summit (Singapore Science Centre), Tinker Lab (Hall E), at one of the following 3 timeslots: 9:30, 11:30, 13:30. Bring your own PC or borrow one from mission control; we provide internet access in the lab room. Fill up the registration form and check in with it at Mission Control. A mission briefing will be provided, you will be given access to the GitHub repository where your mission resources are provided, and you can proceed to crack the challenges. Badges of honor are to be earned, and a bounty awaits the team with the best time! Winners will be announced at 17:30! Be there!

Installations needed:
- NodeJS (https://nodejs.org/en/)
- Any code editor (Visual Studio Code/Atom/Sublime Text etc.)
- Open Source Bot Framework Emulator (https://emulator.botframework.com/)


Getting code coverage in a Nodejs project using Travis and CodeCov

We had set up unit tests on the webapp generator using Mocha and Chai, as I blogged before. But we also need coverage reports for each commit and for the overall state of the repo. Since it is hosted on GitHub, Travis comes to our rescue. As you can see from our .travis.yml file, we already had Travis running to check builds and deploy to Heroku. Now, to enable Codecov, simply go to http://codecov.io and enable your repository (you have to log in with GitHub to see your GitHub repos). Once you do, your dashboard should be visible, e.g. https://codecov.io/github/fossasia/open-event-webapp. We use istanbul to get code coverage. To try it out, just run

```
istanbul cover _mocha
```

at the root of your project (where the /test/ folder is). That should generate a folder called coverage containing an lcov report; Codecov can read lcov reports. They provide a bash script which can be run to automatically upload coverage reports:

```
bash <(curl -s https://codecov.io/bash)
```

Now go back to your Codecov dashboard, and your coverage report should show up. If all is well, we can integrate this with Travis so that it happens on every code push. Add this to your .travis.yml file:

```
script:
  - istanbul cover _mocha
after_success:
  - bash <(curl -s https://codecov.io/bash)
```

This ensures that on each push we run coverage first, and if the build is successful we push the result to Codecov. We can then browse coverage file by file, and line by line within a file.


sTeam REST API Unit Testing

(ˢᵒᶜⁱᵉᵗʸserver) aims to be a platform for developing collaborative applications. sTeam server project repository: sTeam. sTeam-REST API repository: sTeam-REST

Unit Testing the sTeam REST API

The unit testing of the sTeam REST API is done using karma and the jasmine test runner, both of which are set up in the project repository. The karma test runner: the main goal of Karma is to bring a productive testing environment to developers, one where they don't have to set up loads of configuration, but rather a place where they can just write code and get instant feedback from their tests, because getting quick feedback is what makes you productive and creative. The jasmine test runner: Jasmine is a behavior-driven development framework for testing JavaScript code. It does not depend on any other JavaScript frameworks. It does not require a DOM. And it has a clean, obvious syntax so that you can easily write tests. The karma and jasmine test runners were configured for the project and basic tests were run. The angular.js and angular-mocks versions in the local development repository differed, which introduced a new error into the project repo: the 'angular.element.cleanData is not a function' error is thrown in the local development repository. This error happens when the local versions of angular.js and angular-mocks.js don't match; the testing framework will tell you if the versions of the two libraries are not the same. The jasmine test runner can be accessed from the browser, while the karma tests are run from the command line. To access the jasmine test runner from a web browser, go to the URL http://localhost:7000/test/unit/runner.html. To run the karma test suite, run the following command:

```
$ karma start
```

The unit tests of the sTeam REST service were done using jasmine and written in CoffeeScript.
The preprocessor that compiles the files from CoffeeScript to JavaScript is defined in the karma configuration file. (Screenshots: Jasmine Test Runner; Jasmine Test Failure.) First a dummy pass case and a fail case are tested to check that there are no errors in the test suite during test execution. The localstoragemodule.js used in the steam service is injected into the test module. Then the steam service version is tested.

```coffee
describe 'Check version of sTeam-service', ->
  it 'should return current version', inject (version) ->
    expect(version).toEqual('0.1')
```

The steam service should be injected into a global variable, as the same service functions will be tested in the remaining tests. Then the steam service is injected and checked for existence.

```coffee
beforeEach inject (_steam_) ->
  steam = _steam_

describe 'Check sTeam service injection', ->
  it 'steam service should exist', ->
    expect(steam).toBeDefined()
```

The sTeam service has both private and public functions. The private functions cannot be accessed from outside; the private functions defined in the sTeam service are handle_request and headers.

```coffee
describe 'Check sTeam service functions are defined.', ->
  describe ' Check the sTeam REST API private functions.', ->
```
…


sTeam API Endpoint Testing

(ˢᵒᶜⁱᵉᵗʸserver) aims to be a platform for developing collaborative applications. sTeam server project repository: sTeam. sTeam-REST API repository: sTeam-REST

sTeam API Endpoint Testing using Frisby

sTeam API endpoint testing is done using Frisby. Frisby is a REST API testing framework built on node.js and Jasmine that makes testing API endpoints very easy, speedy and joyous.

Issue | GitHub Issue | GitHub PR
sTeam-REST Frisby Test for login | Issue-38 | PR-40
sTeam-REST Frisby Tests | Issue-41 | PR-42

Write Tests

Frisby tests start with frisby.create with a description of the test, followed by one of get, post, put, delete, or head, and ending with toss to generate the resulting jasmine spec test. Frisby has many built-in test helpers like expectStatus to easily test HTTP status codes, expectJSON to test expected JSON keys/values, and expectJSONTypes to test JSON value types, among many others.

```javascript
// Registration Tests
frisby.create('Testing Registration API calls')
  .post('http://steam.realss.com/scripts/rest.pike?request=register', {
    email: "ajinkya007.in@gmail.com",
    fullname: "Ajinkya Wavare",
    group: "realss",
    password: "ajinkya",
    userid: "aj007"
  }, { json: true })
  .expectStatus(200)
  .expectJSON({
    "request-method": "POST",
    "request": "register",
    "me": restTest.testMe,
    "__version": testRegistrationVersion,
    "__date": testRegistrationDate
  })
  .toss();
```

testMe, testRegistrationVersion and testRegistrationDate are functions written in rest_spec.js. Frisby API endpoint tests have been written for testing the user login and for accessing the user home directory, user workarea, user container, user document, user created image, groups and subgroups. The REST API URLs used for testing are described below. A payload consists of the user id and password.

Check if the user can log in:
http://steam.realss.com/scripts/rest.pike?request=aj007

Test whether a user workarea exists or not (here the aj workarea has been created by the user):
http://steam.realss.com/scripts/rest.pike?request=aj007/aj

Test whether a user created container exists or not:
http://steam.realss.com/scripts/rest.pike?request=aj007/container

Test whether a user created document exists or not:
http://steam.realss.com/scripts/rest.pike?request=aj007/abc.pike

Test whether a user created image (an object of any mime-type) inside a container exists or not:
http://steam.realss.com/scripts/rest.pike?request=aj007/container/Image.jpeg

Test whether a group exists or not. The group name and the subgroups can be queried, e.g. GroupName: groups, Subgroup: test. The subgroup is appended to the group name with a ".":
http://steam.realss.com/scripts/rest.pike?request=groups.test

Here "groups" is a group name and "gsoc" is a subgroup of it:
http://ngtg.techgrind.asia/scripts/rest.pike?request=groups.gsoc
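The pattern behind those URLs is simple string composition; a hypothetical helper (not part of the sTeam code) makes it explicit:

```javascript
// Hypothetical helper: build a rest.pike request URL from a base host and
// a request path (user/workarea/container path, or group.subgroup name).
function steamRestUrl(base, requestPath) {
  return base + '/scripts/rest.pike?request=' + requestPath;
}

console.log(steamRestUrl('http://steam.realss.com', 'aj007/container'));
// http://steam.realss.com/scripts/rest.pike?request=aj007/container
console.log(steamRestUrl('http://steam.realss.com', 'groups.test'));
// http://steam.realss.com/scripts/rest.pike?request=groups.test
```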


Generating Documentation and Modifying the sTeam-REST API

(ˢᵒᶜⁱᵉᵗʸserver) aims to be a platform for developing collaborative applications. sTeam server project repository: sTeam. sTeam-REST API repository: sTeam-REST

Documentation

Documentation is an important part of software engineering. Types of documentation include:
- Requirements: statements that identify attributes, capabilities, characteristics, or qualities of a system. This is the foundation for what will be or has been implemented.
- Architecture/Design: overview of the software. Includes relations to the environment and construction principles to be used in the design of software components.
- Technical: documentation of code, algorithms, interfaces, and APIs.
- End user: manuals for the end-user, system administrators and support staff.
- Marketing: how to market the product and analysis of the market demand.

Doxygen

Doxygen is the de facto standard tool for generating documentation from annotated C++ sources, but it also supports other popular programming languages such as C, Objective-C, C#, PHP, Java, Python, IDL (Corba, Microsoft, and UNO/OpenOffice flavors), Fortran, VHDL, Tcl, and to some extent D. Doxygen treats files of other languages as C/C++ and creates documentation for them accordingly. We tried to create the sTeam documentation with Doxygen, but empty documentation was generated due to the lack of Doxygen annotations in the project. The next approach was to use the autodoc utility provided by Pike, available in later versions of Pike (>= 8.0.155). The autodoc files are generated first and later converted into HTML pages. The commands used for generating the autodoc are:

```
pike -x extract_autodoc /source
pike -x autodoc_to_html /src /opfile
```

The autodoc_to_html utility converts a single autodoc file to an HTML page, so a shell script was written to convert all the generated autodoc files to HTML files.
docGenerator.sh:

```shell
#!/bin/bash
shopt -s globstar

for filename in ./**/*.pike.xml; do
  outputFile=doc/${filename#./}
  outputFile=${outputFile%.xml}.html
  if [ -d $(dirname "./"$outputFile) ]; then
    touch "./"$outputFile
  else
    mkdir -p $(dirname "./"$outputFile) && touch "./"$outputFile
  fi
  pike -x autodoc_to_html $filename "./"$outputFile
done
```

The documentation generated by this was less informative and lacked referrals to other classes and headers. The societyserver project was developed long ago, whereas the autodoc utility was introduced in later versions of Pike; as a result, the source files lack the autodoc tags required to generate well-informative documentation with bindings to other files.

Restructuring the sTeam-REST API

The sTeam-REST API project made use of angular-seed to bootstrap development during the early phases, but those files still existed in the project. This led to pandemonium and made the project difficult to understand. The files had to be removed, and the app was in dire need of restructuring. The following issues have been reported and resolved (see the sTeam-REST Issues and PR on GitHub). The new UI can be seen below (Home, Register, About). Testing the REST API: the functionality to run the tests using the npm test command was added to the project.…
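The two parameter expansions that docGenerator.sh relies on can be sketched in isolation (the example path is hypothetical):

```shell
# Stand-alone sketch of the path rewriting done in docGenerator.sh:
# strip the leading "./" and swap the .xml suffix for .html.
filename="./classes/object.pike.xml"

outputFile=doc/${filename#./}        # -> doc/classes/object.pike.xml
outputFile=${outputFile%.xml}.html   # -> doc/classes/object.pike.html

echo "$outputFile"
```

`${var#pattern}` removes the shortest matching prefix and `${var%pattern}` removes the shortest matching suffix, which is why the script needs no external tools like sed for this step.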
