Adding download feature to LoklakWordCloud app on Loklak apps site

One of the most important and useful features that has recently been added to the LoklakWordCloud app is the option to download the generated word cloud as a PNG/JPEG image. This feature allows users to actually use the app as a tool to generate a word cloud from Twitter data and save it to disk for future use.

All the user needs to do is generate the word cloud, choose an image type (png or jpeg) and click on "Export as image"; a preview of the image to be downloaded is then displayed. Just hit enter and the word cloud is saved to disk. Thus users do not have to resort to workarounds like taking a screenshot of the generated word cloud.

Presently the complete app is hosted on the Loklak apps site.

How does it work?

What we are doing is exporting a part of the page (a div) as an image and saving it. It might seem that we are taking a screenshot of a particular portion of the page and generating a download link, but that is not actually the case. The word cloud generated by this app via Jqcloud is a collection of HTML nodes. Each node contains a word (part of the cloud) as its text content, with some CSS styles specifying the size and color of that word. When the user clicks on the export-to-image option, the app traverses the div containing the cloud, collects information about all the HTML nodes present under that div, and creates a canvas representation of the entire div. So rather than taking a screenshot of the div, the app recreates the entire div and presents it to us. This whole process is accomplished by a lightweight JS library called html2canvas.

Let us have a look at the code that implements the download feature. First we need to create the UI for the export and download option. The user should be able to choose between png and jpeg before exporting the image. For this we have provided a dropdown containing the two options.

<div class="dropdown type" ng-if="download">
  <div class="dropdown-toggle select-type" data-toggle="dropdown">
    {{imageType}}
    <span class="caret"></span>
  </div>
  <ul class="dropdown-menu">
    <li ng-click="changeType('png', 'png')"><a href="">png</a></li>
    <li ng-click="changeType('jpeg', 'jpg')"><a href="">jpeg</a></li>
  </ul>
</div>
<a class="export" ng-click="export()" ng-if="download">Export as image</a>

In the above code snippet, we first create a dropdown menu with two list items, png and jpeg. To each list item we attach an ng-click event which calls the changeType function and passes two parameters, the image type and the file extension.

The changeType function simply updates the current image type and extension with the selected ones.

$scope.changeType = function(type, ext) {
        $scope.imageType = type;
        $scope.imageExt = ext;
    }

Clicking on ‘Export as image’ calls the export function. The export function uses the html2canvas library’s interface to generate the canvas representation of the word cloud, generates the download link, and attaches it to the modal’s save button (described below). After everything is done it finally opens a modal with the preview image and a save option.

$scope.export = function() {
        html2canvas($(".wordcloud"), {
          onrendered: function(canvas) {
            // get a data URL for the chosen image type
            var imageData = canvas.toDataURL("image/" + $scope.imageType);
            var regex = /^data:image\/jpeg/;
            if ($scope.imageType === "png") {
                regex = /^data:image\/png/;
            }
            // rewrite the MIME type so the browser downloads the data instead of displaying it
            var newData = imageData.replace(regex, "data:application/octet-stream");
            canvas.style.width = "80%";
            $(".wordcloud-canvas").html(canvas);
            $(".save-btn").attr("download", "Wordcloud." + $scope.imageExt).attr("href", newData);
            $("#preview").modal('show');
          },
          background: "#ffffff"
        });
    }

At the very beginning of this function, a call is made to the html2canvas module, and the div containing the word cloud is passed as a parameter along with an object containing a callback function defined for the onrendered key. Inside the callback we check the current image type and generate the corresponding data URL from the canvas. We display the canvas in the modal and set the download URL as the href value of the modal’s save button.

Finally we display the modal.

The modal simply contains the preview image and a button to save the image on disk.
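
For reference, a minimal sketch of such a modal could look like the markup below (the exact markup in the app may differ; the #preview id and the .wordcloud-canvas and .save-btn classes match the selectors used in the export function above):

<div class="modal fade" id="preview">
  <div class="modal-dialog">
    <div class="modal-content">
      <!-- the rendered canvas is injected here by the export function -->
      <div class="wordcloud-canvas"></div>
      <!-- the download attribute and href are set by the export function -->
      <a class="save-btn btn btn-primary">Save</a>
    </div>
  </div>
</div>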

A sample image produced by the app is shown below.

Important resources

  • Know more about html2canvas here.
  • Know more about Jqcloud here.
  • View the app source here.
  • View loklak apps site source here.
  • View Loklak API documentation here.
  • Learn more about AngularJS here.

Preparing for Automatic Publishing of Android Apps in Play Store

I spent this week searching through libraries and services which provide a way to publish built APKs directly through an API, so that the repositories for Android apps can trigger publishing automatically after each push on the master branch. The projects to be auto-deployed are:

I had had my eyes on fastlane for a couple of months, and it turned out to be the best solution for the task. The tool not only allows publishing of APK files, but also Play Store listings, screenshots, and changelogs. And that is only a subset of its capabilities, bundled in a subservice called supply.

There is a setup process to complete before getting started with this service, which I will go through step by step in this blog. The process is also outlined in the README of the supply project.

Enabling API Access

The first step in the process is to enable API access in your Play Store Developer account if you haven’t done so. For that, you have to open the Play Dev Console and go to Settings > Developer Account > API access.

If this is the first time you are opening it, you’ll be presented with a confirmation dialog detailing the ramifications of the action and asking whether you agree to proceed. Read the terms carefully and click accept if you agree with them. Once you do, you’ll be presented with a settings panel like this:

Creating Service Account

As you can see, there is no registered service account here, so we need to create one. Click on the CREATE SERVICE ACCOUNT button and this dialog will pop up, giving you instructions on how to do so:

So, open the highlighted link in a new tab and the Google API Console will open up, looking something like this:

Click on Create Service Account and fill in these details:

Account Name: Any name you want

Role: Project > Service Account Actor

And then, select Furnish a new private key and select JSON. Click CREATE.

A new JSON key will be created and downloaded to your device. Keep this file secret, as anyone with access to it can at least change the Play Store listings of your apps, if not upload new apps in place of existing ones (those are protected by signing keys).

Granting Access

Now return to the Play Console tab (where we were at the start of Creating Service Account) and click Done, as you have now created the service account. You should see the created service account listed like this:

Now click on grant access, choose Release Manager from the Role dropdown, and select these PERMISSIONS:

Of course you don’t want the fastlane API to access financial data or manage orders. Other than that, it is up to you what to allow or disallow. The same goes for the expiry date; we have left it set to never expire. Click on ADD USER and you’ll see the Release Manager created in the user list like below:

Now you are ready to use the fastlane service, or any other release management service for that matter.

Using fastlane

Install fastlane by running:

sudo gem install fastlane

Go to your project folder and run

fastlane supply init

First it will ask for the location of the private key JSON file you downloaded, and then for the package name of the application you are initializing fastlane for.

Then it will create a metadata folder with the listing information, excluding the images. So you’ll have to download and place the images manually for the first time.

After modifying the listing, images or APK, run the command:

fastlane supply run

That’s it. Your app along with the store listing has been updated!

This is a very brief introduction to the capabilities of the supply service. All interactive options can also be supplied via command-line arguments, certain parts of the metadata can be omitted, and alpha/beta management along with release rollout can be done in steps!
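
For instance, to push only a new APK to the beta track while leaving images and screenshots untouched, a command along these lines should work (the flags are standard supply options; treat this as a sketch rather than the exact invocation):

fastlane supply --apk app-release.apk --track beta --skip_upload_images --skip_upload_screenshots

Make sure to check out the links below: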

Implementing Advanced Search Feature In Susper

Susper provides an ‘Advanced Search’ feature which gives the user a better way to search for desired results. Advanced search has been implemented in such a way that it shows top authors, top providers, and the distribution of results by protocol. Users can choose any of these options to narrow down to the best results.

We receive the data for each facet name from YaCy using the YaCy search endpoint. More about the YaCy search endpoint can be found here: http://yacy.searchlab.eu/solr/select?query=india&fl=last_modified&start=0&rows=15&facet=true&facet.mincount=1&facet.field=host_s&facet.field=url_protocol_s&facet.field=author_sxt&facet.field=collection_sxt&wt=yjson

For implementing this feature, we created Actions and Reducers using concepts of Redux. The implemented actions can be found here: https://github.com/fossasia/susper.com/blob/master/src/app/actions/search.ts

Actions have been implemented because they represent events in the application, for example the beginning of an API call.

We also have created an interface for search action which can be found here under reducers as filename index.ts: https://github.com/fossasia/susper.com/blob/master/src/app/reducers/index.ts

Reducers are pure functions that take the previous state and an action and return the next state. We have used Redux to implement the actions and reducers for the advanced search.
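
As a minimal sketch of this pattern (the action type and state shape here are illustrative, not the exact ones used in Susper):

// illustrative action type and action creator
const CHANGE_QUERY = '[Search] Change Query';

function changeQuery(query) {
  return { type: CHANGE_QUERY, payload: query };
}

// the reducer takes the previous state and an action,
// and returns the next state without mutating the old one
const initialState = { query: '' };

function searchReducer(state = initialState, action) {
  switch (action.type) {
    case CHANGE_QUERY:
      return Object.assign({}, state, { query: action.payload });
    default:
      return state;
  }
}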

For advanced search, the reducer file can be found here: https://github.com/fossasia/susper.com/blob/master/src/app/reducers/search.ts

The main logic has been implemented under advancedsearch.component.ts:

export class AdvancedsearchComponent implements OnInit {
  querylook = {}; // holds the current query parameters
  navigation$: Observable<any>;
  selectedelements: Array<any> = []; // URLs selected by the user

  // based on the query, URLs are fetched;
  // when a URL is selected by the user, it is decoded
  changeurl(modifier, element) {
    this.querylook['query'] = this.querylook['query'] + '+' + decodeURIComponent(modifier);
    this.selectedelements.push(element);
    // results are loaded from YaCy according to the selected URLs
    this.route.navigate(['/search'], {queryParams: this.querylook});
  }

  // the same approach is used for removing a URL
  removeurl(modifier) {
    this.querylook['query'] = this.querylook['query'].replace('+' + decodeURIComponent(modifier), '');
    this.route.navigate(['/search'], {queryParams: this.querylook});
  }
}

The changeurl() function appends the selected URL to the query so that results are fetched only from that provider. The removeurl() function removes the URL from the query, reverting to a normal search that fetches results from all providers.

The source code for the implementation of advanced search feature can be found here: https://github.com/fossasia/susper.com/tree/master/src/app/advancedsearch


Using Firebase Test Lab for Testing test cases of Phimpme Android

We have now started writing test cases for Phimpme Android. While running my instrumentation test cases, I noticed a Cloud Testing tab in Android Studio. This is for Firebase Test Lab. Firebase Test Lab provides cloud-based infrastructure for testing Android apps. Nobody has every device running every Android version, but testing on all of them is equally important.

How I used test lab in Phimpme

  • Run your first test on Firebase

Select Test Lab in your project in the left nav of the Firebase console, and then click Run a Robo test. The Robo test automatically explores your app on a wide array of devices to find defects and reports any crashes that occur. It doesn’t require you to write test cases; all you need is the app’s APK.

Upload your application’s APK (app-debug-unaligned.apk) in the next screen and click Continue.

Configure the device selection; a wide range of devices and all API levels are available there. You can save the template for future use.

Click on start test to begin testing. It will run the tests and show the real-time progress as well.

  • Using Firebase Test Lab from Android Studio

This requires Android Studio 2.0+. You need to edit the configuration of the Android Instrumentation test.

Select the Firebase Test Lab Device Matrix under Target. You can configure the matrix, i.e. the set of virtual and physical devices on which you want to run your tests. See the screenshot below for details.

Note: You need to enable Firebase in your project.

So, using Test Lab on Firebase, we can easily run our test cases on multiple devices and make our app more robust.


Create Event by Importing JSON files in Open Event Server

Apart from the usual way of creating an event in FOSSASIA’s Orga Server project by using POST requests in the Events API, another way of creating events is importing a zip file which is an archive of multiple JSON files. This way you can create a large event like FOSSASIA, with lots of data related to sessions, speakers, microlocations and sponsors, just by uploading JSON files to the system. Sample JSON files can be found in the open-event project of FOSSASIA. The basic workflow of importing an event and how it works is as follows:

  • The first step is similar to uploading files to the server. We need to send a POST request with multipart form data containing the zipped archive of JSON files.
  • The POST request starts a celery task that imports the data from the JSON files and stores it in the database.
  • The celery task URL is returned as a response to the POST request. You can poll this URL to get the status: if the status is FAILURE, we get the error text along with it; if the status is SUCCESS, we get the resulting event data.
  • In the celery task, each JSON file is read separately and the data is stored in the db with the proper relations.
  • Sending a GET request to the above-mentioned celery task after it has completed returns the event id along with the event URL.

Let’s see how each of these points work in the background.

Uploading ZIP containing JSON Files

For uploading a zip archive, instead of sending JSON data in the POST request we send multipart form data. The multipart/form-data format allows an entire file to be sent as data in the POST request along with the relevant file information. One can learn about the various form content types here.

An example cURL request looks something like this:

curl -H "Authorization: JWT <access token>" -X POST -F 'file=@event1.zip' http://localhost:5000/v1/events/import/json

The above cURL request uploads the file event1.zip from your current directory, with the key ‘file’, to the endpoint /v1/events/import/json. The user uploading the file needs to have a JWT authentication key, or in other words be logged in to the system, as authentication is necessary to create an event.

@import_routes.route('/events/import/<string:source_type>', methods=['POST'])
@jwt_required()
def import_event(source_type):
    if source_type == 'json':
        file_path = get_file_from_request(['zip'])
    else:
        file_path = None
        abort(404)
    from helpers.tasks import import_event_task
    task = import_event_task.delay(email=current_identity.email, file=file_path,
                                   source_type=source_type, creator_id=current_identity.id)
    # create import job
    create_import_job(task.id)

    # if testing
    if current_app.config.get('CELERY_ALWAYS_EAGER'):
        TASK_RESULTS[task.id] = {
            'result': task.get(),
            'state': task.state
        }
    return jsonify(
        task_url=url_for('tasks.celery_task', task_id=task.id)
    )


After the request is received, we check if a file exists under the key ‘file’ of the form data. If it is there, we save the file and get the path to it. Then we send this path over to the celery task and run the task with the .delay() function of celery. After the celery task is started, the corresponding data about the import job is stored in the database for future debugging and logging purposes. After this, we return the task URL for the celery task that we started.

Celery Task to Import Data

Just like exporting an event, importing is also a time-consuming task, and we don’t want other application requests to be paused because of it. Hence, we use a celery queue to execute this task. Whenever an import task is started, it is added to the celery queue; when it comes to the front of the queue, it is executed.

For importing, we have created a celery task, import.event, which calls the import_event_task_base() function. That function uses the import helper functions to get the data from the JSON files imported and saved in the DB. After the task is completed, we update the import job entry in the table with the status as either SUCCESS or FAILURE, depending on the outcome of the celery task.

As a result of the celery task, the newly created event’s id and the frontend link from which we can visit the event are returned. This, along with the status of the celery task, forms the response for a GET request on the celery task. If the celery task fails, the state is changed to FAILURE and the error the worker faced is returned as the error message in the result key. We also print an error traceback in the celery worker.

@celery.task(base=RequestContextTask, name='import.event', bind=True, throws=(BaseError,))
def import_event_task(self, file, source_type, creator_id):
    """Import Event Task"""
    task_id = self.request.id.__str__()  # str(async result)
    try:
        result = import_event_task_base(self, file, source_type, creator_id)
        update_import_job(task_id, result['id'], 'SUCCESS')
        # return item
    except BaseError as e:
        print(traceback.format_exc())
        update_import_job(task_id, e.message, e.status if hasattr(e, 'status') else 'failure')
        result = {'__error': True, 'result': e.to_dict()}
    except Exception as e:
        print(traceback.format_exc())
        update_import_job(task_id, e.message, e.status if hasattr(e, 'status') else 'failure')
        result = {'__error': True, 'result': ServerError().to_dict()}
    # send email
    send_import_mail(task_id, result)
    # return result
    return result

 

Save Data from JSON

In the import helpers, we have the functions which perform the main task of reading the JSON files, creating sqlalchemy model objects from them and saving them in the database. There are a few global dictionaries which help maintain the order in which the files are to be imported and saved, and also the file-to-model mapping. The first JSON file to be imported is the event JSON file. Since all the other tables to be imported are related to the event table, we read the event JSON file first. After that, the files are read in the following order:

  1. SocialLink
  2. CustomForms
  3. Microlocation
  4. Sponsor
  5. Speaker
  6. Track
  7. SessionType
  8. Session

This order helps maintain the foreign key constraints. For importing data from these files we use the function create_service_from_json(). It sorts the elements in the data list based on the key "id" and then loops over all the elements (dictionaries) contained in the data list. In each iteration it deletes the unnecessary key-value pairs from the dictionary and sets the event_id for that element to the id of the newly created event, instead of the old id present in the data. After all this is done, it creates a model object, based on the filename-to-model mapping, with the dict data, and saves that model object to the database.

def create_service_from_json(task_handle, data, srv, event_id, service_ids=None):
    """
    Given :data as json, create the service on server
    :service_ids are the mapping of ids of already created services.
        Used for mapping old ids to new
    """
    if service_ids is None:
        service_ids = {}
    global CUR_ID
    # sort by id
    data.sort(key=lambda k: k['id'])
    ids = {}
    ct = 0
    total = len(data)
    # start creating
    for obj in data:
        # update status
        ct += 1
        update_state(task_handle, 'Importing %s (%d/%d)' % (srv[0], ct, total))
        # trim id field
        old_id, obj = _trim_id(obj)
        CUR_ID = old_id
        # delete not needed fields
        obj = _delete_fields(srv, obj)
        # related
        obj = _fix_related_fields(srv, obj, service_ids)
        obj['event_id'] = event_id
        # create object
        new_obj = srv[1](**obj)
        db.session.add(new_obj)
        db.session.commit()
        ids[old_id] = new_obj.id
        # add uploads to queue
        _upload_media_queue(srv, new_obj)

    return ids


After the data has been saved, the next thing to do is upload all the media files to the server. We do this using the _upload_media_queue() function. It takes the paths to upload the files to from the storage.py helper file for the APIs. Then it uploads the files, using the various helper functions, to static data storage services like AWS S3, Google Storage, etc.

Other than this, the import helpers also contain the function to create an import job, which keeps a record of every import along with the task URL and the user id of the user who started the import task. It also stores the status of the task. Then there is the get_file_from_request() function, which saves the file uploaded through the POST request and returns the path to that file.

Get Response about Event Imported

The POST request returns a task URL of the form /v1/tasks/ebe07632-392b-4ae9-8501-87ac27258ce5. To get the final result, you need to keep polling this URL. To know more about polling, read my previous blog about exporting an event or visit this link. When the task is completed you get a "result" key along with the status. The state can be either SUCCESS or FAILURE. If it is FAILURE, you get the corresponding error message explaining why the celery task failed. If it is a success, you get the data related to the event that was created by the import. The data returned are the event id, the event name and the event URL, which you can use to visit the event from the frontend. This data is also sent to the user as an email and a notification.

An example response looks something like this:

{
    "result": {
        "event_name": "FOSSASIA 2016",
        "id": "24",
        "url": "https://eventyay.com/events/ab3de6"
    },
    "state": "SUCCESS"
}

The corresponding event name and URL are also sent to the user who started the import task. From the frontend, one can use the values of the result object to show the name of the imported event along with the event URL. Since the id and identifier are both present in the result, one can also use them to send GET, PATCH and other API requests to the events/ endpoint and get the corresponding relationship URLs from it to query the other APIs. Thus, the entire imported data becomes available to the frontend as well.
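
As an illustration, a frontend could poll the task URL with something like the sketch below (the fetch-based code and the 3-second interval are assumptions for illustration, not the actual Open Event frontend code):

// poll the celery task URL until the import settles
function pollTask(taskUrl, onDone) {
  var timer = setInterval(function () {
    fetch(taskUrl)
      .then(function (resp) { return resp.json(); })
      .then(function (data) {
        // stop polling once the task reports SUCCESS or FAILURE
        if (data.state === 'SUCCESS' || data.state === 'FAILURE') {
          clearInterval(timer);
          onDone(data);
        }
      });
  }, 3000); // poll every 3 seconds
}

pollTask('/v1/tasks/ebe07632-392b-4ae9-8501-87ac27258ce5', function (data) {
  console.log(data.state, data.result);
});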

 


Using Custom Forms In Open Event API Server

One feature of the Open Event management system is the ability to add a custom form for an event. The nextgen API Server exposes endpoints to view, edit and delete forms and form fields. This blogpost describes how to use a custom form in the Open Event API Server.

Custom forms allow the event organizer to create personalized forms for his/her event. The form object includes an identifier set by the user and the form itself as a string. The user can also set the type of the form field, which can be either text or checkbox, depending on the user’s needs. There are other fields as well, which are abstracted. These fields include:

  • id : auto generated unique identifier for the form
  • event_id : id of the event with which the form is associated
  • is_required : If the form is required
  • is_included : if the form is to be included
  • is_fixed : if the form is fixed

The last three of these fields are boolean fields and provide the user with better control over form use-cases in event management.

Only the event organizer has permission to edit or delete these forms, while any user who is logged in to eventyay.com can see the fields available for a custom form for an event.

To create a custom-form for event with id=1, the following request is to be made:
POST  https://api.eventyay.com/v1/events/1/custom-forms?sort=type&filter=[]

with all the above described fields to be included in the request body.  For example:

{
 "data": {
   "type": "custom_form",
   "attributes": {
     "form": "form",
     "type": "text",
     "field-identifier": "abc123",
     "is-required": "true",
     "is-included": "false",
     "is-fixed": "false"
   }
 }
}

The API returns the custom form object along with the event relationships and other self and related links. To see what the response looks like exactly, please check the sample here.

Now that we have created a form, any user can get its fields. But let’s say the event organiser wants to update a field or some other attribute of the form; then he can make the following request along with the custom-form id.

PATCH https://api.eventyay.com/v1/custom-forms/1

(Note: custom-form id must be included in both the URL as well as request body)
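
For example, a request body that marks the form field as no longer required might look like this (the attribute values are illustrative):

{
  "data": {
    "id": "1",
    "type": "custom_form",
    "attributes": {
      "is-required": "false"
    }
  }
}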

Similarly, to delete the form,
DELETE https://api.eventyay.com/v1/custom-forms/1     can be used.


Skill Editor in SUSI Skill CMS

SUSI Skill CMS is a web application built on the ReactJS framework for creating and editing SUSI skills easily. It follows an API-centric approach where the SUSI Server acts as the API server. In this blogpost we will see how to add a component which can be used to create a new skill in SUSI Skill CMS.

For creating any skill in SUSI we need four parameters, i.e. model, group, language and skill name, so we need to ask the user for these four parameters. For input purposes we have a common card component which has dropdowns for selecting the model, group and language, and a text field for the skill name.

<SelectField
    floatingLabelText="Model"
    value={this.state.modelValue}
    onChange={this.handleModelChange}
>
    {models}
</SelectField>
<SelectField
    floatingLabelText="Group"
    value={this.state.groupValue}
    onChange={this.handleGroupChange}
>
    {groups}
</SelectField>
<SelectField
    floatingLabelText="Language"
    value={this.state.languageValue}
    onChange={this.handleLanguageChange}
>
    {languages}
</SelectField>
<TextField
    floatingLabelText="Enter Skill name"
    floatingLabelFixed={true}
    hintText="My SUSI Skill"
    onChange={this.handleExpertChange}
/>
<RaisedButton label="Save" backgroundColor="#4285f4" labelColor="#fff" style={{marginLeft:10}} onTouchTap={this.saveClick} />

This is the card component where we get the user input. We have API endpoints on the SUSI Server for getting the lists of models, groups and languages. Using those APIs we inflate the dropdowns.
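
As a sketch, inflating the model dropdown could look like the snippet below (the endpoint name getModel.json and the shape of the response are assumptions for illustration; see the SUSI Server documentation for the real names):

// hypothetical SUSI Server endpoint returning the list of models
$.ajax({
    url: 'http://api.susi.ai/cms/getModel.json',
    dataType: 'jsonp',
    jsonp: 'callback',
    crossDomain: true,
    success: function (data) {
        // assumed response shape: store the names to render as dropdown items
        this.setState({ models: data.models });
    }.bind(this)
});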
Then the user needs to edit the skill. For editing skills we have used Ace Editor. Ace is a code editor written in pure JavaScript. It matches the features of native editors like Sublime and TextMate.

To use Ace we need to install the component.

npm install react-ace --save                        

This command will install the dependency and update the package.json file in our project with this dependency.

To use this editor we need to import AceEditor and place it in the render function of our React class.

<AceEditor
    mode="markup"
    theme={this.state.editorTheme}
    width="100%"
    fontSize={this.state.fontSizeCode}
    height="400px"
    value={this.state.code}
    name="skill_code_editor"
    onChange={this.onChange}
    editorProps={{$blockScrolling: true}}
/>

Now we have a page that looks something like this

Now we need to handle the click event when a user clicks on the save button.

First we check if the user is logged in or not. For this we check if we have the required cookies and the access token of the user.

if (!cookies.get('loggedIn')) {
    notification.open({
        message: 'Not logged In',
        description: 'Please login and then try to create/edit a skill',
        icon: <Icon type="close-circle" style={{ color: '#f44336' }} />,
    });
    return 0;
}

If the user is not logged in, we show an error notification and ask them to log in.

Then we check if all the required fields, like the name of the skill, are filled in, and after that we call an API endpoint on the SUSI Server that will finally store the skill in the skill_data_repo.

let url = "http://api.susi.ai/cms/modifySkill.json";
$.ajax({
    url:url,
    dataType: 'jsonp',
    jsonp: 'callback',
    crossDomain: true,
    success: function (data) {
        console.log(data);
        if(data.accepted===true){
            notification.open({
                message: 'Accepted',
                description: 'Your Skill has been uploaded to the server',
                icon: <Icon type="check-circle" style={{ color: '#00C853' }} />,
            });
           }
    }
});

In the success function of the ajax call we check whether the accepted parameter returned by the server is true. If it is, we show the user a notification with the message “Your Skill has been uploaded to the server”.

To see this component running please visit http://skills.susi.ai/skillEditor.

Resources

Material-UI: http://www.material-ui.com/

Ace Editor: https://github.com/securingsincity/react-ace

Ajax: http://api.jquery.com/jquery.ajax/

Universal Cookies: https://www.npmjs.com/package/universal-cookie

Uploading Images to SUSI Server

SUSI Skill CMS is a web app to create and modify SUSI skills. It needs API endpoints to function, and SUSI Server makes this possible. In this blogpost, we will see how to add a servlet to SUSI Server to upload images and files.

The CreateSkillService.java file is the servlet which handles the process of creating new skills. It requires different user roles to be implemented and hence it extends the AbstractAPIHandler.

Image upload is only possible via a POST request, so we will first override the doPost method in this servlet.

@Override
protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
    resp.setHeader("Access-Control-Allow-Origin", "*"); // enable CORS

resp.setHeader enables CORS for the servlet. This is required, as POST requests must have CORS enabled from the server. This is an important security feature provided by the browser.

Part file = req.getPart("image");
if (file == null) {
    json.put("accepted", false);
    json.put("message", "Image not given");
}

Image upload to servers is usually done as a multipart request. So we get the part which is named “image” in the form data.

When we receive the image file, we check whether an image with the same name already exists on the server.

Path p = Paths.get(language + File.separator + "images/" + image_name);

if (image_name == null || Files.exists(p)) {
    json.put("accepted", false);
    json.put("message", "The Image name not given or Image with same name is already present ");
}

If a file with the same name is present on the server, we return an error asking the user to choose a unique filename for the upload.

Image image = ImageIO.read(filecontent);
BufferedImage bi = this.createResizedCopy(image, 512, 512, true);
if (!Files.exists(Paths.get(language.getPath() + File.separator + "images"))) {
    new File(language.getPath() + File.separator + "images").mkdirs();
}
ImageIO.write(bi, "jpg", new File(language.getPath() + File.separator + "images/" + image_name));

Then we read the content of the image into an Image object. Next we check whether the images directory exists. If there is no images directory in the specified skill path, we create a folder named “images”.

We usually prefer square images at the Skill CMS. So we create a resized copy of the image of 512×512 dimensions and save that copy to the directory we created above.

BufferedImage createResizedCopy(Image originalImage, int scaledWidth, int scaledHeight, boolean preserveAlpha) {
        // use ARGB when transparency must be preserved, plain RGB otherwise
        int imageType = preserveAlpha ? BufferedImage.TYPE_INT_ARGB : BufferedImage.TYPE_INT_RGB;
        BufferedImage scaledBI = new BufferedImage(scaledWidth, scaledHeight, imageType);
        Graphics2D g = scaledBI.createGraphics();
        if (preserveAlpha) {
            g.setComposite(AlphaComposite.Src);
        }
        g.drawImage(originalImage, 0, 0, scaledWidth, scaledHeight, null);
        g.dispose();
        return scaledBI;
    }

The function above is used to create a resized copy of the image with the specified dimensions. If the image is a PNG, it also preserves the transparency of the image while creating the copy.

Since the SUSI server follows an API centric approach, all servlets respond in JSON.

resp.setContentType("application/json");
resp.setCharacterEncoding("UTF-8");
resp.getWriter().write(json.toString());

At last, we set the content type and the character encoding of the output. This helps the clients parse the data easily.

To see this endpoint live, send a POST request to http://api.susi.ai/cms/createSkill.json.
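
From a browser client, such a multipart POST could be issued roughly as follows (a minimal sketch; the fetch-based code is an assumption for illustration, though the "image" part name matches req.getPart("image") in the servlet above):

// build a multipart request carrying the file under the "image" part
var form = new FormData();
var fileInput = document.querySelector('input[type="file"]');
form.append('image', fileInput.files[0]);

fetch('http://api.susi.ai/cms/createSkill.json', {
    method: 'POST',
    body: form // the browser sets the multipart/form-data boundary itself
}).then(function (resp) {
    return resp.json();
}).then(function (json) {
    // the servlet responds with JSON containing "accepted" and "message"
    console.log(json.accepted, json.message);
});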

Resources

Apache Docs: https://commons.apache.org/proper/commons-fileupload/using.html

Multipart POST Request Tutorial: http://www.codejava.net/java-se/networking/upload-files-by-sending-multipart-request-programmatically

Java File Upload tutorial: https://ursaj.com/upload-files-in-java-with-servlet-api

Jetty Project: https://github.com/jetty-project/