Reset Password Functionality in SUSI iOS

Reset Password, as the name suggests, is a feature in the SUSI iOS app which allows users to change their password while they are logged in. This feature was added because a user may want to change their password from time to time to prevent unauthorized access or to make their account more secure. Many popular apps, such as Facebook and Gmail, allow the user to reset their password. The way this is done is pretty simple: all we need from the user is their current password and the new password they want to set. In this blog post, I am going to explain step by step how this is implemented in the iOS client.

Implementation

The option to Reset Password is provided to the user under the Settings controller. On selecting the row, the user is presented with another view that asks for the current password, the new password, and a third field to confirm the newly entered password.

First, the user needs to provide their current password followed by the new password. The current password is required just to verify that the account’s owner is requesting the password change. The new password field is followed by a confirm password field just to make sure there isn’t any typo.

Now, when the fields are filled, the user clicks the `Reset password` button at the bottom. What happens here is, first, the fields are validated to ensure the correct length of the passwords, followed by an API request to update the password. The endpoint used is as below:

http://api.susi.ai/aaa/changepassword.json?changepassword=user_email&password=current_pass&newpassword=new_pass&access_token=user_access_token

This endpoint requires four things:

  • Current Password
  • New Password
  • User’s email
  • Access Token obtained at the time of login
func validatePassword() -> [Bool: String] {
    if let newPassword = newPasswordField.text,
        let confirmPassword = confirmPasswordField.text {
        // The new password must be at least 6 characters long
        if newPassword.characters.count > 5 {
            if newPassword == confirmPassword {
                return [true: ""]
            } else {
                return [false: ControllerConstants.passwordDoNotMatch]
            }
        } else {
            return [false: ControllerConstants.passwordLengthShort]
        }
    }
    return [false: Client.ResponseMessages.ServerError]
}
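The method returns a single key-value pair, so the caller can read it roughly like this (a hedged sketch; the exact handling in the view controller may differ):

let result = validatePassword()
if let isValid = result.keys.first, let message = result.values.first {
    if isValid {
        // proceed with the change password request
    } else {
        self.view.makeToast(message) // same toast helper used elsewhere in the app
    }
}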

Initially, we were not saving the user’s email, so we added it to the User object, which is saved at the time of login.

if var userData = results {
    userData[Client.UserKeys.EmailOfAccount] = user.email
    UserDefaults.standard.set(userData, forKey: ControllerConstants.UserDefaultsKeys.user)
    self.saveUserGlobally(user: currentUser)
}

Finally, the API call is made, which is implemented as below:

let params = [
  Client.UserKeys.AccessToken: user.accessToken,
  Client.UserKeys.EmailOfAccount: user.emailID,
  Client.UserKeys.Password: currentPasswordField.text ?? "",
  Client.UserKeys.NewPassword: newPasswordField.text ?? ""
]
Client.sharedInstance.resetPassword(params as [String : AnyObject], { (_, message) in
  DispatchQueue.main.async {
    self.view.makeToast(message)
    self.setUIActive(active: false)
  }
})

Below is the final UI.


Filtering List with Search Manager in Connfa Android App

It is good practice to provide a way to filter lists in Android apps to improve the user experience, because scrolling through an entire list to reach a certain entry quickly becomes unpleasant. Recently I modified the Connfa app to read the list of speakers from the Open Event Format. In this blog I describe how to add a filtering facility to lists with the Search Manager.

First, we declare the search menu so that the widget appears in it.

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:id="@+id/search"
        android:title="Search"
        android:icon="@drawable/search"
        android:showAsAction="collapseActionView|ifRoom"
        android:actionViewClass="android.widget.SearchView" />
</menu>

In the above menu item, the collapseActionView attribute allows the SearchView to expand to take up the whole action bar and to collapse back down into a normal action bar item when not in use. Now we create the searchable configuration, which defines how the SearchView behaves.

<?xml version="1.0" encoding="utf-8"?>
<searchable
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:label="@string/app_name"
    android:hint="Search friend">
</searchable>

Also register this searchable configuration for the activity in the manifest file using a <meta-data> tag, as sketched below.
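A minimal sketch of the manifest entry (the activity name and the searchable XML file name are placeholders, not taken from the Connfa sources):

<activity android:name=".SpeakersActivity">
    <meta-data
        android:name="android.app.searchable"
        android:resource="@xml/searchable" />
</activity>

Then associate the searchable configuration with the SearchView in the activity class: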

@Override
public boolean onCreateOptionsMenu(Menu menu) {
    MenuInflater inflater = getMenuInflater();
    inflater.inflate(R.menu.search_menu, menu);

    SearchManager searchManager = (SearchManager)
                            getSystemService(Context.SEARCH_SERVICE);
    searchMenuItem = menu.findItem(R.id.search);
    searchView = (SearchView) searchMenuItem.getActionView();

    searchView.setSearchableInfo(searchManager.
                            getSearchableInfo(getComponentName()));
    searchView.setSubmitButtonEnabled(true);
    searchView.setOnQueryTextListener(this);

    return true;
}

Implement SearchView.OnQueryTextListener in the activity; two new methods need to be overridden:

@Override
public boolean onQueryTextSubmit(String searchText) {
    // No action needed on submit; filtering happens as the text changes
    return true;
}

@Override
public boolean onQueryTextChange(String searchedText) {

   if (mSpeakersAdapter != null) {
       lastSearchRequest = searchedText;
       mSpeakersAdapter.getFilter().filter(searchedText);
   }
   return true;
}
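On the adapter side, getFilter() can be backed by a standard android.widget.Filter. Below is a hedged sketch (the Speaker model, field names and list variables are placeholders, not the exact Connfa code):

// Inside the adapter, which implements android.widget.Filterable
@Override
public Filter getFilter() {
    return new Filter() {
        @Override
        protected FilterResults performFiltering(CharSequence constraint) {
            List<Speaker> filtered = new ArrayList<>();
            String query = constraint.toString().toLowerCase();
            for (Speaker speaker : allSpeakers) {
                // keep speakers whose name contains the searched text
                if (speaker.getName().toLowerCase().contains(query)) {
                    filtered.add(speaker);
                }
            }
            FilterResults results = new FilterResults();
            results.values = filtered;
            results.count = filtered.size();
            return results;
        }

        @Override
        @SuppressWarnings("unchecked")
        protected void publishResults(CharSequence constraint, FilterResults results) {
            visibleSpeakers = (List<Speaker>) results.values;
            notifyDataSetChanged();
        }
    };
}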

Find the complete implementation here. In the end, it will look like this:

 

References

Android Search View documentation – https://developer.android.com/reference/android/widget/SearchView.html


Post Payment Charging in Open Event API Server

The order flow in the Open Event API Server follows a very simple process. On successfully creating an order through the API server, the user receives a payment-url. The API server provides out-of-the-box support for two payment gateways, Stripe and PayPal.
The process is straightforward: on creating the order you receive the payment-url, the frontend completes the payment through that URL, and on completion it hits a specific endpoint which confirms the payment and updates the order status.

Getting the payment-url

The payment-url is sent on successfully creating the order. There are three payment modes which can be provided to the Order API when creating an order: “free”, “stripe” and “paypal”. To get the payment-url, just send the payment mode as stripe or paypal.

POST Payment

After the payment has been processed through the frontend, the charges endpoint is called so that payment verification can be done on the server end.

POST /v1/orders/<identifier>/charge

This endpoint receives the Stripe token if the payment mode is stripe; no token is required to process a PayPal payment.
The response contains the order details on successful verification.
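For illustration, a charge request for a Stripe order could look roughly like the sketch below; the JSON:API envelope and the token value are assumptions for illustration, not an exact payload from the docs.

POST /v1/orders/<identifier>/charge
Content-Type: application/vnd.api+json

{
  "data": {
    "type": "charge",
    "attributes": {
      "stripe": "tok_XXXXXXXX"
    }
  }
}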

Implementation

The implementation of charging is based on a custom data layer in the Orga Server. The custom layer overrides the base data layer and provides a custom implementation of the create_object method, thus bypassing the default Alchemy layer.

def create_object(self, data, view_kwargs):
    order = Order.query.filter_by(id=view_kwargs['id']).first()
    if order.payment_mode == 'stripe':
        # A Stripe order needs the token generated on the client side
        if data.get('stripe') is None:
            raise UnprocessableEntity({'source': ''}, "stripe token is missing")
        success, response = TicketingManager.charge_stripe_order_payment(order, data['stripe'])
        if not success:
            raise UnprocessableEntity({'source': 'stripe_token_id'}, response)

    elif order.payment_mode == 'paypal':
        # PayPal payments are verified without any extra token
        success, response = TicketingManager.charge_paypal_order_payment(order)
        if not success:
            raise UnprocessableEntity({'source': ''}, response)
    return order

With the resource class as

class ChargeList(ResourceList):
    methods = ['POST', ]
    schema = ChargeSchema

    data_layer = {
        'class': ChargesLayer,
        'session': db.session
    }

Resources

  1. Paypal Payments API
    https://developer.paypal.com/docs/api/payments/
  2. Flask-json-api custom layer docs
    http://flask-rest-jsonapi.readthedocs.io/en/latest/data_layer.html#custom-data-layer
  3. Stripe Payments API
    https://stripe.com/docs/charges

Backend Scraping in Loklak Server

Loklak Server is a peer-to-peer distributed scraping system. It scrapes data from websites and also maintains other sources, like peers, storage and a backend server, to fetch data from. Maintaining different sources has its benefits: not engaging in costly requests to the websites, no scraping of data and no cleaning of data.

Loklak Server can maintain a secondary Loklak Server (or a backend server) tuned for storing a large amount of data. This enables the primary Loklak Server to fetch data in return for pushing all scraped data to the backend.

Lately there was a bug in backend search: a new feature for filtering tweets had been added to scraping and indexing, but not to backend search. To fix this issue, I backtracked through the backend search codebase and fixed it.

Let us discuss how backend search works:

1) When a query is made from the search endpoint with:

a) source=all

When source is set to all, TwitterScraper and messages from the local search server are preferred first. If the messages scraped are not enough, or no output has been returned for a specific amount of time, then backend search is initiated.

b) source=backend

SearchServlet specifically fetches data directly from the backend server.
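For instance, an illustrative request forcing this path could look like the following (the query value is just an example):

http://api.loklak.org/api/search.json?q=fossasia&source=backend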

2) Fetching data from Backend Server

The input parameters fetched from the client are fed into the DAO.searchBackend method. The list of backend servers is fetched from the config file. Using these input parameters and backend servers, the required data is scraped and output to the client.

In the DAO.searchOnOtherPeers method, the request is sent to multiple servers, which are arranged in order of better response rates. This method invokes the SearchServlet.search method to send requests to the mentioned servers.

List<String> remote = getBackendPeers();
if (remote.size() > 0) {
    // condition deactivated because we need always at least one peer
    Timeline tt = searchOnOtherPeers(remote, q, filterList, order, count, timezoneOffset, where, SearchServlet.backend_hash, timeout);
    if (tt != null) tt.writeToIndex();
    return tt;
}

 

3) Creation of request url and sending requests

The request URL is created according to the input parameters passed to the SearchServlet.search method, and the request is sent to the respective servers to fetch the required messages.

// URL creation
urlstring = protocolhostportstub + "/api/search.json?q="
        + URLEncoder.encode(query.replace(' ', '+'), "UTF-8") + "&timezoneOffset="
        + timezoneOffset + "&maximumRecords=" + count + "&source="
        + (source == null ? "all" : source) + "&minified=true&shortlink=false&timeout="
        + timeout;
if (!"".equals(filterString = String.join(", ", filterList))) {
    urlstring = urlstring + "&filter=" + filterString;
}
// Download data
byte[] jsonb = ClientConnection.downloadPeer(urlstring);
if (jsonb == null || jsonb.length == 0) throw new IOException("empty content from " + protocolhostportstub);
String jsons = UTF8.String(jsonb);
JSONObject json = new JSONObject(jsons);
if (json == null || json.length() == 0) return tl;
// Final data fetched to be returned
JSONArray statuses = json.getJSONArray("statuses");


Configuring Youtube Scraper with Search Endpoint in Loklak Server

Youtube Scraper is one of the interesting web scrapers of Loklak Server, with a unique implementation of its data scraping and data key creation (using RDF). Earlier it couldn’t be accessed, as it didn’t have any URL endpoint. I configured it so that it can be used both through a separate endpoint (/api/youtubescraper) and through the search endpoint (/api/search.json).

Usage:

  1. YoutubeScraper Endpoint: /api/youtubescraper
     Example: http://api.loklak.org/api/youtubescraper?query=https://www.youtube.com/watch?v=xZ-m55K3FhQ&scraper=youtube
  2. SearchServlet Endpoint: /api/search.json
     Example: http://api.loklak.org/api/search.json?query=https://www.youtube.com/watch?v=xZ-m55K3FhQ&scraper=youtube

The configurations added in Loklak Server are:

1) Endpoint

We can access YoutubeScraper using the /api/youtubescraper endpoint. Like other scrapers, I have used the BaseScraper class as the superclass for this functionality.

2) PrepareSearchUrl

The prepareSearchUrl method creates the YouTube search URL that is used to scrape the YouTube webpage. YoutubeScraper takes a URL as input, but a YouTube link could also be a shortened link. That is why the video id is stored as the query. This approach optimizes the scraper and adds the capability to add more scrapers to it.

Currently YoutubeScraper scrapes the video webpages of Youtube, but scrapers for search webpage and channel webpages can also be added.

URIBuilder url = null;
String midUrl = "search/";
try {
    switch (type) {
        case "search":
            midUrl = "search/";
            url = new URIBuilder(this.baseUrl + midUrl);
            url.addParameter("search_query", this.query);
            break;
        case "video":
            midUrl = "watch/";
            url = new URIBuilder(this.baseUrl + midUrl);
            url.addParameter("v", this.query);
            break;
        case "user":
            midUrl = "channel/";
            url = new URIBuilder(this.baseUrl + midUrl + this.query);
            break;
        default:
            url = new URIBuilder("");
            break;
    }
} catch (URISyntaxException e) {
    DAO.log("Invalid Url: baseUrl = " + this.baseUrl + ", mid-URL = " + midUrl + ", query = " + this.query + ", type = " + type);
    return "";
}

 

3) Get-Data-From-Connection

The getDataFromConnection method is used to fetch a BufferedReader object and pass it to the scrape method. In YoutubeScraper, this method has been overridden to prevent using the default implementation, i.e. type=all.

@Override
public Post getDataFromConnection() throws IOException {
    String url = this.prepareSearchUrl(this.type);
    return getDataFromConnection(url, this.type);
}

 

4) Set scraper parameters from the get-parameters

The Map data structure of get-parameters fetched by the scraper provides the type and the query. For a URL, the video hash code is separated from the URL and then used as the query.

this.query = this.getExtraValue("query");
// Keep only the 11-character video id at the end of the URL
this.query = this.query.substring(this.query.length() - 11);

 

5) Scrape Method

The scrape method runs the different scraper methods (in YoutubeScraper, there is only one), iterates over them using PostTimeline and wraps the output in a Post object. This simple function improves the flexibility of the scraper to scrape different pages concurrently.

Post out = new Post(true);
Timeline2 postList = new Timeline2(this.order);
postList.addPost(this.parseVideo(br, type, url));
out.put("videos", postList.toArray());

 


Adding EditText With Google Input Option While Sharing In Phimpme App

In the Phimpme Android app a user can share images on multiple platforms. While sharing, we have also included a caption option to enter a description of the image. That caption can be entered using the keyboard as well as the Google voice input method. So in this post, I will be explaining how to add an EditText with the Google voice input option.

Let’s get started

Step-1 Add EditText and Mic Button in layout file

<ImageView
       android:id="@+id/button_mic"
       android:layout_width="20dp"
       android:layout_height="20dp"
       android:background="?android:attr/selectableItemBackground"
       android:src="@drawable/ic_mic_black"
       android:scaleType="fitCenter" />
</RelativeLayout>


Caption Option in Share Activity in Phimpme

In Phimpme we have a material design dialog box, so right now I am using getTextInputDialogBox(). It prompts the material design dialog box to enter the caption for sharing the image on multiple platforms.

Step-2

Now we can get the caption from the EditText easily using the following code.

if (!captionText.isEmpty()) {
  caption = captionText;
  text_caption.setText(caption);
  captionEditText.setSelection(caption.length());
} else {
  caption = null;
  text_caption.setText(caption);
}


Step – 3 (Now add the Google voice input option)

To use the Google voice input option I have added a global function in the Utils class. To use it, just call the method with the proper arguments.

public static void promptSpeechInput(Activity activity, int requestCode, View parentView, String promtMsg) {
  Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
  intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
  intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault());
  intent.putExtra(RecognizerIntent.EXTRA_PROMPT, promtMsg);
  try {
      activity.startActivityForResult(intent, requestCode);
  } catch (ActivityNotFoundException a) {
      SnackBarHandler.show(parentView,activity.getString(R.string.speech_not_supported));
  }

}

Just pass the request code to receive the speech text in the onActivityResult() method, and pass the prompt message which will be visible to the user.

Step – 4 (Set text to caption)

if (requestCode == REQ_CODE_SPEECH_INPUT && data!=null){
       ArrayList<String> result = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
       String voiceInput = result.get(0);
       text_caption.setText(voiceInput);
       caption = voiceInput;
       return;

}

Now we can set the text in the caption string. Right now I am appending the text to the existing caption, i.e. if the user enters some text using the EditText and then clicks on the mic button, the extra text will be added after the previous text.

So this is how I have used the Google voice input method and made the function global.


Store Log History for Repositories Registered to Yaydoc

Yaydoc, our automatic documentation generation and deployment project, generates and deploys documentation for each of its registered repositories. For every commit made to a registered repository, there is a corresponding build process running at Yaydoc. These build processes have their own logs which are stored as text files. However, until now, these logs were never visible to the user. So, if there was a failed build process, the user would never know the reason behind it, leaving the user unable to rectify the error.

Hence, there was a need to make these logs available to users. The initial thought was to store only the latest log, overwriting all the previous logs for the repository. However, it was unanimously decided by the developers to store a history of logs for the repository. The main motive behind this was to enable users to compare logs between different commits.

The content from the log files created is stored in a MongoDB collection. Following is the schema defined for the build logs.

const mongoose = require('mongoose');

const BuildLog = mongoose.model('BuildLog', mongoose.Schema({
  repository: String,  // `full_name` of the repository
  buildNumber: {       // Incrementing number for each build
    type: Number,
    default: 0,
  },
  generate: {
    data: Buffer,      // Generate logs content
    datetime: Date,    // Date time of generate log creation
  },
  ghpages: {
    data: Buffer,      // GitHub Pages logs content
    datetime: Date,    // Date time of GitHub Pages log creation
  },
}));

The repository collection is also updated, adding a builds key storing the number of times the build process was triggered for a given repository. This key is incremented on every new build, and the new value is stored with the build log as buildNumber.

The build process involves a documentation generation and a documentation deployment script. The process of incrementing the build number in the repository occurs when we store the documentation generation logs. After that, Github pages logs are stored when the documentation deployment process is completed.
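For completeness, the increment helper referenced in storeGenerateLogs below could be sketched like this (a hedged sketch; the field names follow the schema above, and the actual Yaydoc implementation may differ):

// A hedged sketch of the build-counter increment helper (not the exact Yaydoc code)
module.exports.incrementBuildNumber = function (name, callback) {
  Repository.findOneAndUpdate(
    { name: name },                 // `full_name` of the repository
    { $inc: { builds: 1 } },        // bump the build counter atomically
    { new: true },                  // return the updated document
    function (error, repository) {
      callback(error, repository);
    }
  );
};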

Since the logs are stored in a text file at the location temp/<email>/<filename>.txt, we had to read the file using NodeJS File system. The file is read synchronously using the fs.readFileSync(filename) function and then stored in the MongoDB collection.

/**
 * Store logs created while generating docs for a given repository
 * @param name: `full_name` of the repository
 * @param filepath: file path of the generate logs
 * @param callback
 */
module.exports.storeGenerateLogs = function (name, filepath, callback) {
  Repository.incrementBuildNumber(name, function (error, repository) {
    var buildlog = new BuildLog({
      repository: name,
      buildNumber: repository.builds,
      generate: {
        data: fs.readFileSync(filepath),
        datetime: new Date()
      }
    });
    buildlog.save(function (error, repository) {
      callback(error, repository);
    });
  });
};

/**
 * Store logs created while deploying docs for a given repository
 * @param name: `full_name` of the repository
 * @param filepath: file of the ghpages deploy logs
 * @param callback
 */
module.exports.storeGithubPagesLogs = function (name, filepath, callback) {
  Repository.getRepositoryByName(name, function (error, repository) {
    if (error) {
      callback(error);
    } else {
      BuildLog.getParticularBuildLog(repository.name, repository.builds, 
      function (error, buildLog) {
        buildLog.ghpages.data = fs.readFileSync(filepath);
        buildLog.ghpages.datetime = new Date();
        buildLog.save(function (error, buildLog) {
          callback(error, buildLog);
        });
      });
    }
  });
};

The stored logs can then be retrieved at two different routes, with /:owner/:name/logs showing a list of logs from at most 10 builds and /:owner/:name showing the latest log. Similar to logs generated by Travis, accessing these routes doesn’t require the user to log in to Yaydoc.

/**
 * Get a single repository with a log history of 10
 * @param name: `full_name` of the repository
 * @param callback
 */
module.exports.getRepositoryWithLogs = function (name, callback) {
  Repository.aggregate([
    { $match: { name: name } },
    {
      $lookup: {
        from: 'buildLogs',
        localField: 'name',
        foreignField: 'repository',
        as: 'logs'
      }
    },
    { $unwind: '$logs' },
    { $sort: { 'logs.buildNumber': -1 } },
    { $limit: 10 }
  ]).exec(function (error, results) {
    callback(error, results);
  });
};

In order to retrieve a repository along with its logs, we perform an aggregation in MongoDB which is similar to a Left Join in SQL. This is the $lookup aggregation and it performs a left outer join to an unsharded collection in the same database to filter in documents from the “joined” collection for processing. A similar method is used to retrieve the latest log by setting the limit aggregation to 1.
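The latest-log variant mentioned above could look like the following hedged sketch (the same pipeline with the limit set to 1; the helper name is illustrative):

module.exports.getRepositoryWithLatestLog = function (name, callback) {
  Repository.aggregate([
    { $match: { name: name } },
    {
      $lookup: {
        from: 'buildLogs',
        localField: 'name',
        foreignField: 'repository',
        as: 'logs'
      }
    },
    { $unwind: '$logs' },
    { $sort: { 'logs.buildNumber': -1 } },
    { $limit: 1 }                   // keep only the most recent build log
  ]).exec(function (error, results) {
    callback(error, results);
  });
};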

Resources:

  1. MongoDB Aggregation Lookup: https://docs.mongodb.com/manual/reference/operator/aggregation/lookup/
  2. Mongoose Aggregate Constructor: http://mongoosejs.com/docs/api.html#index_Mongoose-Aggregate
  3. NodeJS File System: https://nodejs.org/api/fs.html#fs_fs_readfilesync_path_options

Showing sample queries in SUSI.AI Bots

We need to give the user a good start to their chat with SUSI.AI. Engaging users with some good skills at the start of the conversation can leave a good impression of SUSI.AI. In the SUSI messenger bots, we show some sample queries to try during the conversation with SUSI.AI. In this blog, SUSI_Tweetbot and SUSI_FBbot are used as examples.

These queries are shown as quick replies i.e. the user can click on any of these sample queries and get an answer from SUSI.AI.  

Facebook:

When the user clicks on the “Start chatting” button, we send a descriptive message about what the user can ask SUSI.AI.

Code snippet used for this step is:

var queryUrl = 'http://api.susi.ai/susi/chat.json?q=' + 'Start+chatting';
var startMessage = '';
// Wait until done and reply
request({
    url: queryUrl,
    json: true
}, function (error, response, body) {
    if (!error && response.statusCode === 200) {
        startMessage = body.answers[0].actions[0].expression;
    } else {
        startMessage = errMessage;
    }
    sendTextMessage(sender, startMessage, 0);
});

A plain text message is not very engaging. To further enhance the experience of the user, we show some quick reply options. We have finalized some skills to show to the user:

Due to the character limit for the text shown on buttons, we try to show short queries as shown in the above picture. This way the user gets an idea about what type of queries can be asked.

The generic template helps us achieve this feature in SUSI_FBbot.

The code snippet used:

var messageT = {
    "type": "template",
    "payload": {
        "template_type": "generic",
        "elements": [{
            "title": 'You can try the following:',
            "buttons": [{
                "type": "postback",
                "title": "What is FOSSASIA?",
                "payload": "What is FOSSASIA?"
            }]
        }]
    }
};
sendTextMessage(sender, messageT, 1);

As seen in the code above, each button has corresponding postback text, so that whenever that button is clicked the postback text is sent to the chat automatically:

This postback text acts as a query to SUSI API which fetches the response from the server and shows it back to the user.

Twitter:

As SUSI.AI bots must be consistent across all the available messenger platforms, we include the same skills available in SUSI_FBbot in SUSI_Tweetbot. The quick reply feature provided by the Twitter developer platform helps us accomplish this task.

As in SUSI_FBbot, a descriptive message is shown to the user first, followed by some quick reply options.

The message_create event helps in adding quick replies:

var msg = {
    "event": {
        "type": "message_create",
        "message_create": {
            "target": {
                "recipient_id": senderId
            },
            "message_data": {
                "text": "You can try the following:",
                "quick_reply": {
                    "type": "options",
                    "options": [{
                        "label": "What is FOSSASIA?",
                        "metadata": "external_id_4"
                    }]
                }
            }
        }
    }
};
T.post('direct_messages/events/new', msg, sent);

One thing to keep in mind while coding is to send the quick reply message only after the initial descriptive message, i.e. the code used to send quick replies should be written inside the function which sends the descriptive message, so that it runs only after that step is complete. If we accidentally write the quick reply code outside that function, it is highly likely that the replies by SUSI.AI will contain bugs.

Resources

  1. Speed up customer service with quick replies and welcome messages by Ian Cairns from Twitter blog.
  2. Link Ads to Messenger, Enhanced Mobile Websites, Payments and More by Seth Rosenberg from Facebook developers blog

Live Feeds in loklak Media wall using ‘source=twitter’

Loklak Server provides pagination to serve tweets from the Loklak search.json API in chunks, so as to improve response time from the server. We will take advantage of this pagination using the parameter `source=twitter` of the search.json API on the loklak media wall. Using the parameter ‘source=twitter’ in the API does real-time scraping and provides live feeds. To improve response time, it returns as many feeds as specified by the count (default is 100).

In this blog, I explain how I implemented real-time pagination using ‘source=twitter’ in the loklak media wall to get live feeds from Twitter.

Working

First API Call on Initialization

The first API call needs to have a high count (i.e. maximumRecords = 20) so as to get a higher number of feeds and provide a sufficient amount of feeds to fill up the media wall. ‘source=twitter’ must be specified so that real-time feeds are scraped from Twitter.

http://api.loklak.org/api/search.json?q=fossasia&callback=__ng_jsonp__.__req0.finished&minified=true&source=twitter&maximumRecords=20&timezoneOffset=-330&startRecord=1

 

If feeds are received from the server, then the next API request must be sent after 10 seconds, so that the server gets sufficient time to scrape the data and store it in the database. This can be done by an effect which dispatches WallNextPageAction(), keeping debounceTime equal to 10000 so that the next request is sent 10 seconds after WallSearchCompleteSuccessAction().

@Effect()
nextWallSearchAction$ = this.actions$
    .ofType(apiAction.ActionTypes.WALL_SEARCH_COMPLETE_SUCCESS)
    .debounceTime(10000)
    .withLatestFrom(this.store$)
    .map(([action, state]) => {
        return new wallPaginationAction.WallNextPageAction();
    });

Consecutive Calls

To implement pagination, consecutive API calls must be made to add new live feeds to the media wall. For the new feeds, the count must be kept low so that no heavy pagination takes place and feeds are added one by one, keeping the focus on new tweets. For this purpose, the count is kept at one.

this.searchServiceConfig.count = queryObject.count;
this.searchServiceConfig.maximumRecords = queryObject.count;
return this.apiSearchService.fetchQuery(queryObject.query.queryString, this.searchServiceConfig)
    .takeUntil(nextSearch$)
    .map(response => {
        return new wallPaginationAction.WallPaginationCompleteSuccessAction(response);
    })
    .catch(() => of(new wallPaginationAction.WallPaginationCompleteFailAction()));

 

Here, count and maximumRecords are updated from queryObject.count, which varies between 1 and 5 (default being 1). This can be updated by the user from the customization menu.

The next API request is as follows:

http://api.loklak.org/api/search.json?q=fossasia&callback=__ng_jsonp__.__req2.finished&minified=true&source=twitter&maximumRecords=1&timezoneOffset=-330&startRecord=1

 

Now, as done above, if a response is received, the next request is sent 10 seconds after WallPaginationCompleteSuccessAction(), from an effect keeping debounceTime equal to 10000.

In a similar way, new consecutive calls can be made by keeping ‘source=twitter’ and keeping the count low to maintain a proper focus on new feeds.


Real Time Upload Progress in Phimpme Android

The Phimpme Android application, along with a wonderful gallery, image editing and camera section, comes with an option to share images to different connected accounts. For sharing images to different accounts, we have made use of the different SDKs provided, to help users share images to multiple accounts at once without having to install other applications on their devices. When the user connects an account and shares an image, we display a snackbar at the bottom indicating that the upload has started, and then we display the progress of the upload in the notification panel, as depicted in the screenshot below.

In this tutorial, I will be explaining how we achieved this feature of displaying the upload progress in the Phimpme Android application using a Notification handler class.

Step 1

The first thing we need to do is to create an AsyncTask that handles the upload progress and the notifications in the background, without affecting the main UI of the application. This can be done using the UploadProgress class, which is a subclass of the AsyncTask class, as depicted below.

private class UploadProgress extends AsyncTask<Void, Integer, Void> {
}

The AsyncTask overrides three methods: onPreExecute, doInBackground and onPostExecute. In the onPreExecute method we make the upload notification visible to the user via the notification handler class.
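A minimal sketch of how the three callbacks could be wired to the notification handler follows; apart from updateProgress (shown later), the handler method names are hypothetical placeholders used for illustration.

private class UploadProgress extends AsyncTask<Void, Integer, Void> {

    @Override
    protected void onPreExecute() {
        // Show the ongoing "upload in progress" notification before the work starts
        // (make() is a hypothetical name for the handler method described in Step 2)
        NotificationHandler.make();
    }

    @Override
    protected Void doInBackground(Void... params) {
        // Upload the image here and report progress periodically, e.g.
        // publishProgress(percent); which ends up calling
        // NotificationHandler.updateProgress(uploaded, total, percent);
        return null;
    }

    @Override
    protected void onPostExecute(Void result) {
        // Replace the ongoing notification with a "completed" (or "failed") one
        // (uploadPassed() is a hypothetical name)
        NotificationHandler.uploadPassed();
    }
}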

Step 2

After this, we need to create a notification handler class that handles the upload progress. We need four methods inside the notification handler class to:

  1. Make the app notification in the notification panel.
  2. To update the progress of the upload.
  3. To display the upload failed progress.
  4. To display the upload passed progress.

The notification can be displayed using the following lines of code:

mNotifyManager = (NotificationManager) ActivitySwitchHelper.getContext().getSystemService(Context.NOTIFICATION_SERVICE);
mBuilder = new NotificationCompat.Builder(ActivitySwitchHelper.getContext());
mBuilder.setContentTitle(ActivitySwitchHelper.getContext().getString(R.string.upload_progress))
      .setContentText(ActivitySwitchHelper.getContext().getString(R.string.progress))
      .setSmallIcon(R.drawable.ic_cloud_upload_black_24dp)
      .setOngoing(true);
mBuilder.setProgress(0, 0, true);
// Issues the notification
mNotifyManager.notify(id, mBuilder.build());

The above code makes use of Android’s NotificationManager class to get the notification service, and sets the title and the upload icon which are displayed to the user at the time of image uploads.

Now we need to update the notification every second to display the real-time upload progress to the user. This can be done by using the updateProgress method, which takes the total file size, the amount of data uploaded and the percentage completed as parameters.

public static void updateProgress(int uploaded, int total, int percent) {
    mBuilder.setProgress(total, uploaded, false);
    mBuilder.setContentTitle(ActivitySwitchHelper.getContext().getString(R.string.upload_progress) + " (" + Integer.toString(percent) + "%)");
    // Issues the notification
    mNotifyManager.notify(id, mBuilder.build());
}
The above updating process can be done in the doInBackground task of the AsyncTask described in step 1.

Step 3

After the upload has completed, the onPostExecute method is executed. In it, we need to display whether the upload passed or failed, and set the ongoing flag of the notification to false so that the user can remove the notification. This can be done using the following lines of code:

mBuilder.setContentText(ActivitySwitchHelper.getContext().getString(R.string.upload_done))
      // Removes the progress bar
      .setProgress(0,0,false)
      .setContentTitle(ActivitySwitchHelper.getContext().getString(R.string.upload_complete))
      .setOngoing(false);
mNotifyManager.notify(0, mBuilder.build());
mNotifyManager.cancel(id);

This is how we have created and made use of the Notification handler class in the Phimpme Application to display the upload progress in the application. To get the full source code for implementing the uploads to multiple accounts and to display the notification, please refer to the Phimpme Android GitHub repository.

Resources

  1. Google Developer’s Guide – Notification Handling – https://developer.android.com/guide/topics/ui/notifiers/notifications.html
  2. Google Developer’s Guide – AsyncTask in Android – https://developer.android.com/reference/android/os/AsyncTask.html
  3. StackOverflow – Notification Handling – https://stackoverflow.com/questions/31063920/how-to-program-android-notification
  4. GitHub – Phimpme Android Repository – https://github.com/fossasia/phimpme-android/