Differentiating between received file formats for Event Apps

Sometimes we need to operate on files of different formats within the same app. For example, the Giggity app parses schedules in iCal, xCal, Pentabarf, and now the Open Event JSON format.

This can be problematic, as every format needs a different kind of manipulation to read and extract the relevant data. For example, the JSON format is built from objects and arrays, from which we can retrieve data by referencing keys, while Pentabarf XML identifies its elements with tags. Moreover, it is not evident from the link itself which format is being received. So, to determine which type of file has been received, instead of analyzing the entire file we look at the first few characters of the syntax, from which the format becomes evident.

I recently used this method to differentiate between the JSON and iCal formats to display the sessions of an event in the Giraffe app using the Open Event JSON format, which is also used in Giggity.

Let's have a look at how to actually implement this. It is simple: after you get the data from the URL, instead of iterating over the entire payload, we read only the initial data to see what kind of file we have received.

BufferedReader r = new BufferedReader(new InputStreamReader(inputStream));
// Read only the first line instead of iterating over the whole response
String line = r.readLine();

if (line.contains("BEGIN:VCALENDAR")) {
    // iCal files always begin with BEGIN:VCALENDAR
    SharedPreferences.Editor edit = prefs.edit();
    edit.putString("type", "ical");
    edit.commit();
    Log.e("Response type", "ical");
} else if (line.contains("{")) {
    // A JSON response starts with an opening brace
    SharedPreferences.Editor edit = prefs.edit();
    edit.putString("type", "json");
    edit.commit();
    Log.e("Response type", "json");
}

We save the type of response received in the shared preferences so that we can retrieve it later from anywhere in the app to see what kind of data we have received, instead of passing it between functions or activities.

Note that this check should happen as you download the data from the URL asynchronously, so you don't need to repeat it later; any further manipulation of the data needs to be done asynchronously too.

So when you are checking asynchronously using the AsyncTask class, get the shared preferences in onPreExecute() and update them in doInBackground() as you download the data and check it, as sketched below.
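A minimal sketch of such a task, assuming it is an inner class of an Activity (the class name, preference file name and URL handling here are illustrative, not the actual Giraffe code):

private class DownloadScheduleTask extends AsyncTask<String, Void, Void> {
    private SharedPreferences prefs;

    @Override
    protected void onPreExecute() {
        // Get the shared preferences on the main thread, before background work starts
        prefs = getSharedPreferences("schedule", MODE_PRIVATE);
    }

    @Override
    protected Void doInBackground(String... urls) {
        try {
            InputStream inputStream = new URL(urls[0]).openStream();
            BufferedReader r = new BufferedReader(new InputStreamReader(inputStream));
            String line = r.readLine();
            // Detect and store the type while still on the background thread
            String type = line.contains("BEGIN:VCALENDAR") ? "ical" : "json";
            prefs.edit().putString("type", type).commit();
        } catch (IOException e) {
            Log.e("Response type", "could not read the response", e);
        }
        return null;
    }
}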

The other advantage of this method is that we can stop the task or return to the main activity if the file format found is not readable or not of the desired format.


Map Responses from SUSI Server

Replying with a map is a very common response for assistants. The Android and iOS clients of SUSI.AI can render their own custom map views, but doing so is a challenge in bots and other clients that can only render images.

The MapServlet in SUSI-Server is a servlet that takes the constraints of a map as input parameters and returns a PNG image of the rendered map.

This is a simple HttpServlet. Since it does not require any auth or user rights, we do not extend AbstractAPIHandler in this class.

 
public class MapServlet extends HttpServlet {

This is the method that runs whenever the servlet receives a GET query. After getting the query, the function calls another function, process(…), which processes the input parameters.

 
@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    Query post = RemoteAccess.evaluate(request);

These lines provide DoS protection. They check whether the user's query rate is too high and rate-limit the queries; if the server suspects that the request is part of a DoS attack, it returns a 503 error.

if (post.isDoS_blackout()) {
  response.sendError(503, "your request frequency is too high"); return;
} 
process(request, response, post);

The process method gets the zoom, latitude, longitude, width and height of the map and checks them for errors.

if (map.getHeight() > height || map.getWidth() > width) {
            BufferedImage bi = map.getImage();

            int xoff = (map.getWidth() - width) / 2;
            int yoff = (map.getHeight() - height) / 2;

            bi = bi.getSubimage(xoff, yoff, width, height);
            map = new RasterPlotter(width, height, RasterPlotter.DrawMode.MODE_REPLACE, "FFFFFF");
            map.insertBitmap(bi, 0, 0);
        }

Then we compute our image using RasterPlotter.

Here we determine the size of the map. If the user passed a height and width as GET parameters, we scale the map accordingly; if not, it defaults to 256×256.

We use the code below to draw text on the map, for example the name of a place or city:

map.setDrawMode(DrawMode.MODE_SUB);
map.setColor(0xffffff);
if (text.length() > 0) PrintTool.print(map, 6, 12, 0, uppercase ? text.toUpperCase() : text, -1, false, 100);
PrintTool.print(map, map.getWidth() - 6, map.getHeight() - 6, 0, "MADE WITH LOKLAK.ORG", 1, false, 50);

Next, we draw a marker on the map. The marker image has a height of 40 pixels and a width of 25 pixels:

    int mx = (int) (map.getWidth() * (lon - west_lon) / (east_lon - west_lon));
    int my = (int) (map.getHeight() * (lat - north_lat) / (south_lat - north_lat));
      
    final BufferedImage logo = ImageIO.read(FileSystems.getDefault().getPath("html").resolve("artwork").resolve("marker-red.png").toFile());
    map.insertBitmap(logo, Math.min(map.getWidth() - 25, Math.max(0, mx - 12)), Math.min(map.getHeight() - 40, Math.max(0, my - 40)), FilterMode.FILTER_ANTIALIASING);

At last, we add the OpenStreetMap copyright notice at the bottom left.

PrintTool.print(map, 6, map.getHeight() - 6, 0, "(C) OPENSTREETMAP CONTRIBUTORS", -1, false, 100);

At the end, we set the header for cross-origin access and write the image.

        response.addHeader("Access-Control-Allow-Origin", "*");
        RemoteAccess.writeImage(fileType, response, post, map);
        post.finalize();
    }
}

The servlet can be tested locally at:

http://localhost:4000/vis/map.png?text=Test&mlat=1.28373&mlon=103.84379&zoom=18

Or on the live SUSI server:
http://api.susi.ai/vis/map.png?text=Test&mlat=1.28373&mlon=103.84379&zoom=18
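For a quick check from the command line, the returned PNG can also be saved to a file with curl (the output file name here is arbitrary):

curl -o map.png "http://localhost:4000/vis/map.png?text=Test&mlat=1.28373&mlon=103.84379&zoom=18"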


References:

https://www.openstreetmap.org/

http://wiki.openstreetmap.org/wiki/Java_Access_Example

https://github.com/fossasia/susi_server/


Setting up Your Own Custom SUSI AI server

When you chat with any of the SUSI clients, all the requests are made to the standard SUSI server. But let us say that you have implemented some features on the server and want to test them. Simply launch your own instance and you get your own SUSI server. You can then change the option to use your own server in the SUSI Android and Web Chat clients.

The first step to get your own copy of the SUSI server is to browse to https://github.com/fossasia/susi_server and fork it. Clone your fork to your local machine and make the changes that you want. There are various tutorials on how to add more servlets to the SUSI server, how to add new parameters to an existing action type, or how to modify an existing servlet such as the memory servlet.

To start coding, you can either use an IDE like IntelliJ IDEA (download it from here) or open the files you want to modify in a text editor. To start the SUSI server from the command-line terminal (and not an IDE), do the following:

 

Open the terminal in the cloned project directory and run the following command (this assumes Java is installed and the JAVA_HOME variable is set up):

./gradlew build

This will install all the dependencies required by the project. Next, execute:

bin/start.sh

This will take a little time, but soon a window will pop up in your default browser and the server will start; the landing page of the website will load. Make the modifications to the server you were aiming for. The server is up now, and you can access it at the local IP address:

http://127.0.0.1/

But if you try to add this local address in the URL field of any client, it will not work and will give you an error. To access this server from a client, open your terminal and execute:

ipconfig	// if you are running a Windows machine
ifconfig	// if you are running a Linux machine

 

This will give you a list of IP addresses. Find the Wireless LAN (Wi-Fi) adapter and copy its IPv4 address. This is the address you need to enter in the server URL field of the client (do not forget to add http:// before the IP address). Now sign up and you will be good to go.



Creating inter-component actions in Open Event Front-end

The Open Event Front-end project is built using Ember.js, which lets us create modular components. While implementing the sessions route in the project, we faced the challenge of sending actions between components. To solve this problem we used the Ember action helpers, which bubble the action up to the controller and send it to the desired component. How did we solve it?

Handling actions in ember

In Ember we can handle actions using {{action 'function'}}, where the function executes every time the action on the element is triggered. This can be used to handle actions for the component. You can define actions for elements like this:

<a href="#" class="item {{if (eq selectedTrackId track.id) 'active'}}" {{action 'filter' track.id}}>
  {{track.name}}
</a>

All the actions defined using the {{action}} helper are defined inside the actions section of the component. Here the filter action gets bound to the click event of the anchor tag. The above helper will pass the id of the track (track.id) as a parameter to the filter function defined in the component.

Whenever the element is clicked, the filter function defined in the component is triggered. This works for handling actions within a component; however, when we need to trigger inter-component actions, this approach does not help.

Sending actions from component to controller

We will send an action from the component to the controller of the route in which the component is rendered, using a computed property defined inside the controller which watches selectedTrackId.

{{public/session-filter selectedTrackId=selectedTrackId tracks=tracks}}

Whenever the anchor is clicked, it passes the id of the selected track to the filter function of the component. Inside the filter function, we set the selectedTrackId variable that was passed to the component inside the route template as specified above.

actions: {
  filter(trackId = null) {
    this.set('selectedTrackId', trackId);
  }
}

The selectedTrackId is observed by a computed property defined in the controller, which modifies the session list based on the id passed by the session-filter component.

Handling action in the controller

Inside the controller we have a computed property called sessionsByTracks, which observes the sessions array and the selectedTrackId passed by the session-filter component.

sessionsByTracks: computed('model.sessions.[]', 'selectedTrackId', function() {
  if (this.get('selectedTrackId')) {
    return chain(this.get('model.sessions'))
      .filter(['track.id', this.get('selectedTrackId')])
      .orderBy(['track.name', 'startAt'])
      .value();
  } else {
    return chain(this.get('model.sessions'))
      .orderBy(['track.name', 'startAt'])
      .groupBy('track.name')
      .value();
  }
}),
tracks: computed('model.sessions.[]', function() {
  return chain(this.get('model.sessions'))
    .map('track')
    .uniqBy('id')
    .orderBy('name')
    .value();
})

sessionsByTracks is the property that gets filtered using lodash functions, based on the track id passed to it. On the first render all the sessions are displayed, as selectedTrackId is set to null.

{{public/session-list sessions=sessionsByTracks}}

This property is passed to the session-list component, which renders the filtered list of sessions based on the track selected in the session-filter component. Check the source code for the example here.



Testing Asynchronous Code in Open Event Orga App using RxJava

In the last blog post, we saw how to test complex interactions through our apps using stubbed behaviors with Mockito. In this post, I'll be talking about how to test RxJava components such as Observables. This one will focus on testing complex situations using RxJava, as the library itself provides methods to unit test your reactive streams, so that you don't have to go out of your way to set up contraptions like callback captors or implement your own interfaces as stubs of the original ones. The test suite (of sorts) provided by RxJava also allows you to test the fate of your stream, like confirming that it got subscribed or that an error was thrown, or to test an individual emitted item, by its value or with predicate logic of your own. We have used this heavily in the Open Event Orga App (Github Repo) to check that the app correctly loads and refreshes resources from the correct source. We also capture certain triggers on events, like the saving of data on reload, so that the database remains in a consistent state. Here, we'll look at some basic examples and move to some complex ones later. So, let's start.

public class AttendeeRepositoryTest {

    private AttendeeRepository attendeeRepository;

    @Before
    public void setUp() {
        attendeeRepository = new AttendeeRepository(); // illustrative; construct the repository with its (mocked) dependencies
    }

    @Test
    public void shouldReturnAttendeeByName() {
        // TODO: Implement test
    }

}

 

This is our basic test class setup with the general JUnit settings. We'll start by writing our tests, the first of which will verify that we can get an attendee by name. The attendee class is a model class with firstName and lastName, and we will check whether we get a valid attendee by passing a full name. Note that although we will be talking about the implementation of the code which we are writing tests for, it will only be in an abstract manner, meaning we won't be dealing with production code, just the tests.

As we know, Observables provide a stream of data. But here, we are only concerned with one attendee. Technically, we should be using Single, but for generality, we'll stick with Observables.

So, a person coming from a JUnit background would be tempted to write the code below.

Attendee attendee = attendeeRepository.getByAttendeeName("John Wick")
    .blockingFirst();

assertEquals("John Wick", attendee.getFirstName() + " " + attendee.getLastName());

 

So, what this code is doing is blocking the thread till the first attendee is provided in the stream and then checking that the attendee is actually John Wick.

While this code works, it is not reactive. With the reactive way of testing, not only can you test more complex logic with less verbosity, but it naturally provides ways to test other behaviors of reactive streams such as subscriptions, errors, completions, etc. We'll only be covering a few. So, let's see the reactive version of the above test.

attendeeRepository.getByAttendeeName("John Wick")
    .firstElement()
    .test()
    .assertNoErrors()
    .assertValue(attendee -> "John Wick".equals(
        attendee.getFirstName() + " " + attendee.getLastName()
    ));

 

So clean and complete. Just by calling test() on the returned observable, we get a whole suite of testing methods, with which we not only tested the name but also that no errors occurred while getting the attendee.

Testing for Network Error on loading of Attendees

OK, so let's move towards a more realistic test. Suppose you call a method on AttendeeRepository that fetches attendees from the network. First, you want to handle the simplest case: there should be an error if there is no connection. If you have (hopefully) set up your project using abstractions for the model with MVP, then it'll be a piece of cake to test this. Let's suppose we have a networkUtils object with a method isConnected.

The NetworkUtils class is a dependency of AttendeeRepository and we have set it up as a mock in our test using Mockito. If this sounds somewhat unfamiliar, please read my previous article, "The Joy of Testing with MVP".

So, our test will look like this

@Test
public void shouldStreamErrorOnNetworkDown() {
    when(networkUtils.isConnected()).thenReturn(false);
    
    attendeeRepository.getAttendees()
        .test()
        .assertErrorMessage("No Network");
}

 

Note that if you don't define the mock object's behavior as I have here, attendeeRepository will likely throw an NPE, since it will be calling isConnected() on an undefined object.

With RxJava, you get a whole lot of methods for each use case. Even for checking errors, you can assert on a particular Throwable, on a predicate over the Throwable, or on the error message, as I have shown in this case.
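For instance, the same check could also be written as a predicate over the Throwable itself; this sketch is equivalent to the message assertion above:

attendeeRepository.getAttendees()
    .test()
    .assertError(throwable -> "No Network".equals(throwable.getMessage()));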

Now, if you run this code, it'll probably fail. That's because if you are offloading the networking task to a different thread using the subscribeOn and observeOn methods, the test body may be detached from the main thread while the requests complete. Furthermore, in an application made for Android, you would have used AndroidSchedulers.mainThread(), but as it is an Android dependency, the test will fail. Well actually, crash. There used to be workarounds involving abstractions even for RxJava schedulers, but RxJava2 provides a very convenient way to override the default schedulers in the form of RxJavaPlugins. Similarly, RxAndroidPlugins is present in the rx-android package. Let's suppose you plan to use Schedulers.io() for asynchronous work and want to receive the stream on Android's main thread, meaning you use AndroidSchedulers.mainThread() in the observeOn method. To override these schedulers with Schedulers.trampoline(), which queues your tasks and performs them one by one like the main thread, your setUp will include this:

RxJavaPlugins.setIoSchedulerHandler(scheduler -> Schedulers.trampoline());
RxAndroidPlugins.setInitMainThreadSchedulerHandler(scheduler -> Schedulers.trampoline());

 

And if you are not using isolated tests and need to restore the default scheduler behavior after each test, you'll need to add this to your tearDown method:

RxJavaPlugins.reset();
RxAndroidPlugins.reset();
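
Putting the two snippets together, a test class under these assumptions wires the overrides into JUnit's lifecycle like this (a sketch; your setUp will typically also construct the class under test and its mocks):

@Before
public void setUp() {
    // Make both the io and the Android main-thread scheduler run work synchronously
    RxJavaPlugins.setIoSchedulerHandler(scheduler -> Schedulers.trampoline());
    RxAndroidPlugins.setInitMainThreadSchedulerHandler(scheduler -> Schedulers.trampoline());
}

@After
public void tearDown() {
    // Restore the default schedulers so other tests are unaffected
    RxJavaPlugins.reset();
    RxAndroidPlugins.reset();
}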

Testing for Correct loading of Attendees

Now that we have tested that our Repository is correctly throwing an error when the network is down, let’s test that it correctly loads attendees when the network is connected. For this, we’ll need to mock our EventService to return attendees when queried, since we don’t want our unit tests to actually hit the servers.

So, we’ll need to keep these things in mind:

  • Mock the network utils so that they report that the device is connected to the Internet
  • Mock the EventService to return attendees when queried
  • Call the getter on the attendeeRepository and test that it indeed returned a list of attendees

For these conditions, our test will look like this:

@Test
public void shouldLoadAttendeesSuccessfully() {
    List<Attendee> attendees = Arrays.asList(
        new Attendee(),
        new Attendee(),
        new Attendee()
    );

    when(networkUtils.isConnected()).thenReturn(true);
    when(eventService.getAttendees()).thenReturn(Observable.just(attendees));

    attendeeRepository.getAttendees()
        .test()
        .assertValues(attendees.toArray(new Attendee[attendees.size()]));
}

 

The assertValues function asserts that these exact values were emitted by the observable. And to be more thorough, you can even verify that EventService's getAttendees function was in fact called, by

verify(eventService).getAttendees();

 

But the problem with this approach is that getAttendees returns an observable, and just calling the function does not necessarily mean that the observable was subscribed and emitting results; hence we need to test that it was indeed subscribed. If we call the normal test() function on the observable, it is already subscribed, making an assertion like assertSubscribed() trivially true. In order to test this correctly, let's look at our final use case.

Testing for saving of Attendees

In the Open Event Orga App, we have strived to create self-sufficient and intelligent classes, and thus our repository is also built this way. It detects when new attendees are loaded from the server and saves them in the database. Now we want to test this functionality.

In this test, there is an added dependency of DatabaseRepository for saving the attendees, which we will mock. The conditions for this test will be:

  • Network is connected
  • EventService returns attendees
  • DatabaseRepository mocks the saving of attendees

For DatabaseRepository's save method, we'll be returning a Completable, which notifies its subscriber when the saving of data is completed. The primary purpose of this test is to assert that this Completable is indeed subscribed when the attendee loading is triggered. This not only ensures that the correct function to save the attendees is called, but also that it is actually triggered and not just left hanging after the call. So, our test will look like this.

@Test
public void shouldSaveAttendeesInDatabase() {
    List<Attendee> attendees = Arrays.asList(
        new Attendee(),
        new Attendee(),
        new Attendee()
    );

    TestObserver testObserver = TestObserver.create();
    Completable completable = Completable.complete()
        .doOnSubscribe(testObserver::onSubscribe);

    when(networkUtils.isConnected()).thenReturn(true);
    when(databaseRepository.save(attendees)).thenReturn(completable);
    when(eventService.getAttendees()).thenReturn(Observable.just(attendees));

    attendeeRepository.getAttendees()
        .test()
        .assertNoErrors();

    testObserver.assertSubscribed();
}

 

Here, we have created a separate test observer, set it to be subscribed when the Completable is subscribed, and returned that Completable when the save method is called. At the end, we assert that the test observer is indeed subscribed.

You can create more complex use cases and assert subscriptions, errors, the emptiness of a stream and much more, using the built-in test functionalities of RxJava2. So, that's all for this blog; you can visit these links for more details on unit testing RxJava:

http://fedepaol.github.io/blog/2015/09/13/testing-rxjava-observables-subscriptions/

https://www.infoq.com/articles/Testing-RxJava


Implementing Speech To Text in SUSI iOS

SUSI, being an intelligent bot, lets the user provide input hands-free by talking, without requiring them to even lift the phone for typing. The speech-to-text feature is available in SUSI iOS with the help of the Speech framework, which was released alongside iOS 10 and enables continuous speech detection and transcription. The detection is really fast and supports around 50 languages and dialects, from Arabic to Vietnamese. The speech recognition API does its heavy lifting on Apple's servers, which requires an internet connection. The API is not always available on all newer devices, and it also provides the ability to check whether a specific language is supported at a given time.

How to use the Speech to Text feature?

  • Go to the view controller and import the speech framework
  • Now, because the speech is transmitted over the internet and uses Apple's servers for computation, we need to ask the user for permission to use the microphone and the speech recognition feature. Add the following two keys to the Info.plist file, which display alerts asking the user's permission to use speech recognition and to access the microphone. Add a specific sentence for each key string, which will be displayed to the user in the alerts.
    1. NSSpeechRecognitionUsageDescription
    2. NSMicrophoneUsageDescription

The prompts appear automatically when the functionality is used in the app. Since we already have hot word recognition enabled, the microphone alert shows up automatically after login, and the speech one shows up after the microphone button is tapped.
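The corresponding Info.plist entries could look like the following (the description strings are placeholders; choose wording appropriate for your app):

<key>NSSpeechRecognitionUsageDescription</key>
<string>Speech recognition is used to transcribe your voice queries.</string>
<key>NSMicrophoneUsageDescription</key>
<string>The microphone is used to record your voice queries.</string>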

3) To request authorization for speech recognition from the user, we use the method SFSpeechRecognizer.requestAuthorization.

func configureSpeechRecognizer() {
        speechRecognizer?.delegate = self

        SFSpeechRecognizer.requestAuthorization { (authStatus) in
            var isEnabled = false

            switch authStatus {
            case .authorized:
                print("Autorized speech")
                isEnabled = true
            case .denied:
                print("Denied speech")
                isEnabled = false
            case .restricted:
                print("speech restricted")
                isEnabled = false
            case .notDetermined:
                print("not determined")
                isEnabled = false
            }

            OperationQueue.main.addOperation {

                // handle button enable/disable

                self.sendButton.tag = isEnabled ? 0 : 1

                self.addTargetSendButton()
            }
        }
    }

4) Now, we create instances of AVAudioEngine, SFSpeechRecognizer, SFSpeechAudioBufferRecognitionRequest and SFSpeechRecognitionTask.

let speechRecognizer = SFSpeechRecognizer(locale: Locale.init(identifier: "en-US"))
var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
var recognitionTask: SFSpeechRecognitionTask?
let audioEngine = AVAudioEngine()

5) Create a method called `readAndRecognizeSpeech`. Here, we do all the recognition-related work. We first check whether a recognitionTask is running, and if it is, we cancel it.

if recognitionTask != nil {
  recognitionTask?.cancel()
  recognitionTask = nil
}

6) Now, create an instance of AVAudioSession to prepare for audio recording, where we set the category of the session to recording, set the mode, and activate it. Since these calls might throw an exception, they are added inside a do-catch block.

let audioSession = AVAudioSession.sharedInstance()

do {

    try audioSession.setCategory(AVAudioSessionCategoryRecord)

    try audioSession.setMode(AVAudioSessionModeMeasurement)

    try audioSession.setActive(true, with: .notifyOthersOnDeactivation)

} catch {

    print("audioSession properties weren't set because of an error.")

}

7)  Instantiate the recognitionRequest.

recognitionRequest = SFSpeechAudioBufferRecognitionRequest()

8) Check that the device has an audio input; otherwise raise a fatal error.

guard let inputNode = audioEngine.inputNode else {

fatalError("Audio engine has no input node")

}

9)  Enable recognitionRequest to report partial results and start the recognitionTask.

recognitionRequest.shouldReportPartialResults = true

recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in

  var isFinal = false // to indicate if final result

  if result != nil {

    self.inputTextView.text = result?.bestTranscription.formattedString

    isFinal = (result?.isFinal)!

  }

  if error != nil || isFinal {

    self.audioEngine.stop()

    inputNode.removeTap(onBus: 0)

    self.recognitionRequest = nil

    self.recognitionTask = nil

  }
})

10) Next, we write the method that performs the actual speech recognition. This will record and process the speech continuously.

  • First, we create a singleton for the incoming audio using .inputNode
  • .installTap configures the node and sets up the buffer size and the format
let recordingFormat = inputNode.outputFormat(forBus: 0)

inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, _) in

    self.recognitionRequest?.append(buffer)

}

11)  Next, we prepare and start the audio engine.

audioEngine.prepare()

do {

  try audioEngine.start()

} catch {

  print("audioEngine couldn't start because of an error.")

}

12)  Create a method that stops the Speech recognition.

func stopSTT() {

    print("audioEngine stopped")

    audioEngine.inputNode?.removeTap(onBus: 0)

    audioEngine.stop()

    recognitionRequest?.endAudio()

    indicatorView.removeFromSuperview()



    if inputTextView.text.isEmpty {

        self.sendButton.setImage(UIImage(named: ControllerConstants.mic), for: .normal)

    } else {

        self.sendButton.setImage(UIImage(named: ControllerConstants.send), for: .normal)

    }

        self.inputTextView.isUserInteractionEnabled = true
}

13) Update the view while speech recognition is running, indicating its status to the user. Add the code below just after the audio engine preparation.

// Listening indicator

self.indicatorView.frame = self.sendButton.frame

self.indicatorView.isUserInteractionEnabled = true

let gesture: UITapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(startSTT))

gesture.numberOfTapsRequired = 1

self.indicatorView.addGestureRecognizer(gesture)
self.sendButton.setImage(UIImage(), for: .normal)

indicatorView.startAnimating()

self.sendButton.addSubview(indicatorView)

self.sendButton.addConstraintsWithFormat(format: "V:|[v0(24)]|", views: indicatorView)

self.sendButton.addConstraintsWithFormat(format: "H:|[v0(24)]|", views: indicatorView)

self.inputTextView.isUserInteractionEnabled = false



Deploying documentations generated by Yaydoc to Heroku

There are many web applications available online that generate static websites. Among these are two unique projects developed here at FOSSASIA: the Open Event WebApp Generator and Yaydoc, an automatic documentation generation and deployment project. Since Yaydoc already supports deploying the generated documentation to GitHub Pages, it was just a matter of time before deployment to Heroku was also supported.

Heroku is an excellent cloud-based platform used as a web application deployment service. Heroku provides most of its services free of cost and is excellent for hosting static websites, provided a little bit of tweaking is done.

For this implementation, we use the Platform API provided by Heroku. Quoting its description from the documentation:

The platform API empowers developers to automate, extend and combine Heroku with other services. You can use the platform API to programmatically create apps, provision add-ons and perform other tasks that could previously only be accomplished with Heroku toolbelt or dashboard.

In order to deploy static websites to Heroku, we first need to prepare a bundle of source code that has been compiled and is ready for execution on the Heroku runtime. This bundle is known as a Slug.

cd temp/$EMAIL/${UNIQUE_ID}_preview
mkdir -p app
cd app

# Download the NodeJS runtime into the app directory
curl https://nodejs.org/dist/v6.11.0/node-v6.11.0-linux-x64.tar.gz | tar xzv > /dev/null

# Copy the server file and the generated static files in as well
cp $BASE/web.js .
rsync -av --progress ../ . --exclude app

cd ..
tar czfv slug.tgz ./app > /dev/null

We use the files generated for preview and bundle them into a slug. We also download the NodeJS runtime files, since we are deploying a static website to Heroku. Along with the static files, we need to bundle a NodeJS server file (web.js) that will be used to serve the static files in the application.
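The web.js file itself is not shown in this post. A minimal version of such a server could look like the sketch below, assuming the static files live next to web.js and Heroku's PORT environment variable determines the port (the actual file in Yaydoc may differ):

// web.js -- minimal static file server (sketch only; no path sanitization)
var http = require("http");
var fs = require("fs");
var path = require("path");

http.createServer(function (req, res) {
  // Map "/" to index.html, everything else to the matching file on disk
  var file = path.join(__dirname, req.url === "/" ? "index.html" : req.url);
  fs.readFile(file, function (err, data) {
    if (err) {
      res.writeHead(404);
      res.end("Not Found");
      return;
    }
    res.writeHead(200);
    res.end(data);
  });
}).listen(process.env.PORT || 5000);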

After preparing the slug, we publish the static web application to Heroku. For this, we start by creating a Heroku app using the command `heroku create <app-name>`. The app name is decided by the user when filling in the form in the Yaydoc Web App. Following that, we request Heroku to allocate a new slug for the app, and then we upload the slug tar file to the platform.

# Create the Heroku app
heroku create $APP_NAME

# Allocate a new slug
Arr=($(curl -u ":$API_KEY" -X POST \
-H 'Content-Type:application/json' \
-H 'Accept: application/vnd.heroku+json;version=3' \
-d '{"process_types":{"web":"node-v6.11.0-linux-x64/bin/node web.js"}}' \
-n https://api.heroku.com/apps/${APP_NAME}/slugs | \
python -c "import sys,json; obj=json.load(sys.stdin);
print(obj['blob']['url'] + '\n' + obj['id'])"))

# Upload the slug tar file
curl -X PUT \
-H "Content-Type:" \
--data-binary @slug.tgz \
"${Arr[0]}"

After uploading the slug to Heroku, we need to release the app. This is done using the following command.

curl -u ":$API_KEY" -X POST \
-H "Accept: application/vnd.heroku+json; version=3" \
-H "Content-Type: application/json" \
-d '{"slug":"'${Arr[1]}'"}' \
-n https://api.heroku.com/apps/$APP_NAME/releases

Releasing the application completes the process of deployment, making the documentation generated by Yaydoc available at the following URL: https://<app-name>.herokuapp.com/

 


Editing a file stored in the webserver from the Yaydoc Web App

As a developer working on a web application, you may want your users to be able to edit a file stored on your web server. There are certain use cases in which this may be required. Consider, for instance, its use case in Yaydoc.

Yaydoc offers its users continuous deployment of their documentation by adding certain configurations to their .travis.yml file. It is possible for Yaydoc to achieve the editing of the Travis file from the web app itself.

To enable support for this functionality in your web application, I have prepared a script using ExpressJS and Socket.IO which performs the following actions. At the client side, we define a retrieve-file event which emits a request to the server. At the server side, we handle the event by executing a retrieveContent(...) function, which uses the spawn method of child_process to run a command that retrieves the content of the file.

// Client Side
$(function () {
 var socket = io();
 ....
 ....
 $("#editor-button").click(function () {
   socket.emit("retrieve-file");
 });
 ....
 ....
});

// Server Side
io.on("connection", function (socket) {
 socket.on("retrieve-file", function () {
   retrieveContent(socket);
 });
 ....
 ....
});

var retrieveContent = function (socket) {
 var process = require("child_process").spawn("cat", [".travis.yml"]);
 process.stdout.on("data", function (data) {
   socket.emit("file-content", data);
 });
};

After the file content is retrieved from the server, we use a JavaScript editor like ACE to edit the content of the file. After making all the changes to the file, we emit a store-content event. At the server side, we handle this event by executing a storeContent(...) function, which uses the exec method of child_process to run a bash command that stores the content back to the same file.

$(function () {
 var socket = io();
 var editor = ace.edit("editor");
 ....
 ....
 socket.on("file-content", function (data) {
   editor.setValue(data, -1);
 });
 ....
 ....
 $("#save-modal").click(function () {
   socket.emit("store-content", editor.getValue());
 });
});

io.on("connection", function (socket) {
 ....
 socket.on("store-content", function (data) {
   storeContent(socket, data);
 });
});

var storeContent = function (socket, data) {
 var script = 'truncate -s 0 .travis.yml && echo "' + data + '" >> .travis.yml';
 var process = require("child_process").exec(script);

 process.on("exit", function (code) {
   if (code === 0) {
     socket.emit("save-successful");
   }
 });
};

After successful execution of the script, a save-successful event is sent to the client side, which can then be handled.
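On the client, handling this event can be as simple as the sketch below (the notification mechanism is an assumption; any UI feedback will do):

socket.on("save-successful", function () {
  // Let the user know the file was written back on the server
  alert("Changes saved successfully");
});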

A minimal sample of this application can be found at: https://github.com/imujjwal96/socket-editor


Setting up Yaydoc on Heroku

Yaydoc takes as input information about a user's repository containing documentation in markup files and generates a static website from it. The website also includes search functionality within the documentation. It supports various built-in and custom Sphinx themes.

Since the web user interface is now prepared with some solid features, it was time to deploy. We chose Heroku for this because of the ease with which we can build and scale the application free of cost.

Yaydoc consists of two components: a web user interface, and the generation and deployment scripts. The web UI is developed with NodeJS and the scripts involve Python modules, which requires us to include the following buildpacks:

  • heroku/nodejs
  • heroku/python
  • https://github.com/imujjwal96/heroku-buildpack-pandoc.git

We need to set certain environment variables in Heroku for the proper functioning of Yaydoc. These include:

  • CALLBACKURL – URL where Github must return to after successful authentication
  • CLIENTID – Unique Client-Id generated by Github OAuth Application
  • CLIENTSECRET – Unique Client-Secret generated by Github OAuth Application
  • ENCRYPTION_KEY – Required to encrypt Personal Access Token of the user
  • ON_HEROKU – True, since the application is deployed to Heroku
  • PYPANDOC_PANDOC – Location of Pandoc binaries
  • SECRET – A very secret token

Steps for Manual Deployment

  1. Install Heroku on your local machine.

    • If you have a Linux-based operating system, type the following command in the terminal:
wget -qO- https://cli-assets.heroku.com/install-ubuntu.sh | sh
heroku login
    • Enter your credentials and login.
  2. Deploy Yaydoc to Heroku

    • Clone the original yaydoc repository or your own fork
git clone https://github.com/<username>/yaydoc.git
    • Move to the directory of the cloned repository
cd yaydoc/
    • Create a Heroku application using the following command
heroku create <your-app-name>
    • Add buildpacks to the application using the following commands
heroku buildpacks:set heroku/nodejs
heroku buildpacks:add --index 2 heroku/python
heroku buildpacks:add --index 3 https://github.com/imujjwal96/heroku-buildpack-pandoc.git
    • Set Environment Variables using the following commands
heroku config:set CALLBACKURL=https://<your-app-name>.herokuapp.com/callback
heroku config:set CLIENTID=<github-generated>
heroku config:set CLIENTSECRET=<github-generated>
heroku config:set ENCRYPTION_KEY=AVERYSECRETTOKENOFSPECIFICLENGTH
heroku config:set ON_HEROKU=true
heroku config:set PYPANDOC_PANDOC=~/vendor/pandoc/bin/pandoc
heroku config:set SECRET=averysecrettoken
    • Now deploy your code
git push heroku master
    • Visit the app at the URL generated by its app name
heroku open

Web User Interface for Yaydoc

Yaydoc consists of two components:

  1. A configuration for various Continuous Integration software including Travis CI among others.
  2. A Web User Interface

Since the initial stage of its development, the team has focused on developing a `documentation generation` script and a `publish to Github Pages` script. These scripts have been developed and tested using Travis CI.

We are now at the stage in the development of the project where we can generate the documentation of a project and keep it updated with every push to the GitHub repository that changes the documentation. A sample of this can be seen at https://yaydoc.fossasia.org, which is a deployment of Yaydoc's own documentation using its own scripts.

After gaining enough confidence in the working of the script, we have now shifted our attention towards developing a web user interface for the app. The web UI is intended to provide various functionalities. These include, among others:

  • Generate the documentation and Download the static files in a compressed format.
  • Generate the documentation and make them available for a Preview
  • Generate the documentation and Deploy them to Heroku
  • Generate the documentation and Deploy them to web server using SFTP

NOTE: The aforementioned functionalities are not exhaustive. Also, they are not certain to be developed if they do not prove fruitful for the users of Yaydoc. We do not intend to bloat the application with features and functionalities that may never be used.

Technology Stack

The first issue that comes with developing any web application is the selection of its technology stack. With a huge number of languages and their web application frameworks, it becomes very difficult to reach a conclusion. After a lot of discussion, NodeJS was selected.

The User Interface involves various technologies including

  1. NodeJS – A JavaScript runtime.
  2. ExpressJS – A minimal and flexible Node.js web application framework.
  3. Pug (formerly Jade) – A high-performance template engine implemented for NodeJS.
  4. Socket.IO – A JavaScript library for realtime web applications that enables realtime, bi-directional communication between web clients and servers.

ExpressJS is set up using express-generator, as it prepares a proper minimal architecture which makes it easy to scale up the application. Since the HTML part of the application is minimal, Pug was chosen as it has a very clean and easy-to-read syntax. The use of Socket.IO became necessary as the app has bidirectional communication with the `GENERATE` script, sending its log output to the front-end.
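As an illustration of that bidirectional flow, a server-side sketch could spawn the generation script and stream its output to the browser over the socket (the script path, arguments and event names below are illustrative, not the actual Yaydoc code):

io.on("connection", function (socket) {
  socket.on("execute", function (formData) {
    // Spawn the generation script and forward each chunk of its output to the client
    var process = require("child_process").spawn("./generate.sh", [formData.gitUrl]);
    process.stdout.on("data", function (data) {
      socket.emit("logs", data.toString());
    });
  });
});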

Components of the Web User Interface

The UI consists of a form that asks the user to input:

  1. Email address – To provide a unique identity for the user, to isolate their documentation
  2. GITURL – URL of the repository which contains the docs to be generated
  3. Doc Theme – A dropdown that consists of built-in Sphinx themes.

Out of the various arguments used to generate documentation in Sphinx, the following are assumed:

  • AUTHOR – Name of the user/organization of the repository
  • PROJECTNAME – Name of the repository
  • DOCPATH – Documentation is assumed to be stored at "docs/"

Apart from the form, the UI also has a block that displays the logs while the bash script runs in the backend.

The components defined above are those that have been developed and are being tested rigorously. Since the app is constantly being developed, with new features added almost daily, new components will keep being added to the user interface.
