Implementing Health Check Endpoint in Open Event Server

A health check endpoint was required in the Open Event Server so that Kubernetes can know when the web instance is ready to receive requests.

Following are the checks that were our primary focus:

  • Connection to the database.
  • Ensuring the SQLAlchemy models are in line with the migrations.
  • Connection to the Celery workers.
  • Connection to the Redis instance.

Runscope/healthcheck seemed like the way to go. Healthcheck wraps a Flask app object and adds a way to write simple health-check functions that can be used to monitor your application. It is useful for asserting that your dependencies are up and running and that your application can respond to HTTP requests. The health check functions are exposed via a user-defined Flask route, so you can use an external monitoring application (Monit, Nagios, Runscope, etc.) to check the status and uptime of your application.

The health check endpoint was implemented at /health-check as follows:

from flask import current_app
from healthcheck import HealthCheck

health = HealthCheck(current_app, "/health-check")

 

Following is the function for checking the connection to the database:

def health_check_db():
    """
    Check health status of db
    :return:
    """
    try:
        db.session.execute('SELECT 1')
        return True, 'database ok'
    except Exception:
        sentry.captureException()
        return False, 'Error connecting to database'

 

Check functions take no arguments and should return a tuple of (bool, str). The boolean indicates whether or not the check passed. The message is any string or output that should be rendered for this check, which is useful for error messages and debugging.

The above function executes a query on the database to check whether it is connected properly. If the query runs successfully, it returns the tuple (True, 'database ok'). If there is an error connecting to the database, an exception is raised, and sentry.captureException() makes sure that the Sentry instance receives a proper exception event with all the information about it. The tuple returned in this case is (False, 'Error connecting to database').

Finally, to add this check to the endpoint:

health.add_check(health_check_db)
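The other dependencies from the checklist above can be covered with similar functions. For example, a check for the Redis connection could look roughly like the following sketch; the redis_store handle and the messages are illustrative rather than the exact project code:

def health_check_redis():
    """
    Check health status of redis
    :return:
    """
    try:
        redis_store.ping()   # assumes redis_store is the app's configured Redis client
        return True, 'redis ok'
    except Exception:
        sentry.captureException()
        return False, 'Error connecting to redis'

health.add_check(health_check_redis)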

Following is the response for a successful health check:

{
   "status": "success",
   "timestamp": 1500915121.52474,
   "hostname": "shubham",
   "results": [
       {
           "output": "database ok",
           "checker": "health_check_db",
           "expires": 1500915148.524729,
           "passed": true,
           "timestamp": 1500915121.524729
       }
   ]
}

If the database is not connected the following error will be shown:

{
    "output": "Error connecting to database",
    "checker": "health_check_db",
    "expires": 1500965798.307425,
    "passed": false,
    "timestamp": 1500965789.307425
}


Deploying loklak Server on Kubernetes with External Elasticsearch

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

kubernetes.io

Kubernetes is an awesome cloud platform which ensures that cloud applications run reliably. It supports automated deployments, flawless updates, smart rollouts and rollbacks, simple scaling and a lot more.

So as a part of GSoC, I worked on taking the loklak server to Kubernetes on Google Cloud Platform. In this blog post, I will be discussing the approach followed to deploy the development branch of loklak on Kubernetes.

New Docker Image

Since Kubernetes deployments work on Docker images, we needed one for the loklak project. The existing image would not be up to the mark for Kubernetes as it contained the declaration of volumes and exposing of ports. So I wrote a new Docker image which could be used in Kubernetes.

The image simply clones the loklak server, builds the project and starts the server as the CMD –

FROM alpine:latest

ENV LANG=en_US.UTF-8
ENV JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF8

WORKDIR /loklak_server

RUN apk update && apk add openjdk8 git bash && \
    git clone https://github.com/loklak/loklak_server.git /loklak_server && \
    git checkout development && \
    ./gradlew build -x test -x checkstyleTest -x checkstyleMain -x jacocoTestReport && \
    # Some Configurations and Cleanups

CMD ["bin/start.sh", "-Idn"]

[SOURCE]

This image wouldn’t have any volumes or exposed ports and we are now free to configure them in the configuration files (discussed in a later section).

Building and Pushing Docker Image using Travis

To automatically build and push the image on a commit, the Travis build is used. In the after_success section, a call to push the Docker image is made.

Travis environment variables hold the username and password for Docker hub and are used for logging in –

docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD

[SOURCE]

We needed checks there to ensure that we are on the right branch for the push and we are not handling a pull request –

# Build and push Kubernetes Docker image
KUBERNETES_BRANCH=loklak/loklak_server:latest-kubernetes-$TRAVIS_BRANCH
KUBERNETES_COMMIT=loklak/loklak_server:kubernetes-$TRAVIS_COMMIT

if [ "$TRAVIS_BRANCH" == "development" ]; then
    docker build -t loklak_server_kubernetes kubernetes/images/development
    docker tag loklak_server_kubernetes $KUBERNETES_BRANCH
    docker push $KUBERNETES_BRANCH
    docker tag $KUBERNETES_BRANCH $KUBERNETES_COMMIT
    docker push $KUBERNETES_COMMIT
elif [ "$TRAVIS_BRANCH" == "master" ]; then
    :  # Build and push master
else
    echo "Skipping Kubernetes image push for branch $TRAVIS_BRANCH"
fi

[SOURCE]

Kubernetes Configurations for loklak

A Kubernetes cluster can be completely configured using configuration files written in YAML. The deployment of loklak uses the previously built image. Initially, the image tagged as latest-kubernetes-development is used –

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: server
  namespace: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
      - name: server
        image: loklak/loklak_server:latest-kubernetes-development
        ...

[SOURCE]

Readiness and Liveness Probes

Probes act as the top-level test for the health of a deployment in Kubernetes. They are performed periodically to ensure that things are working fine, and appropriate steps are taken if they fail.

When a new image is rolled out, the older pod still runs and serves the requests. It is replaced by the new one only when the probes succeed; otherwise, the update is rolled back.

In loklak, the /api/status.json endpoint gives information about the status of the deployment and hence is a good target for probes –

livenessProbe:
  httpGet:
    path: /api/status.json
    port: 80
  initialDelaySeconds: 30
  timeoutSeconds: 3
readinessProbe:
  httpGet:
    path: /api/status.json
    port: 80
  initialDelaySeconds: 30
  timeoutSeconds: 3

[SOURCE]

These probes are performed periodically and the server is restarted if they fail (a non-success HTTP status code or a response that takes more than 3 seconds).

Ports and Volumes

In the configurations, port 80 is exposed as this is where Jetty serves inside loklak –

ports:
- containerPort: 80
  protocol: TCP

[SOURCE]

Notice that this is the same port we used for running the probes. Since the development branch deployment holds no dumps, we didn't need to specify any explicit volumes for persistence.

Load Balancer Service

While creating the configurations, a new public IP is assigned to the deployment using Google Cloud Platform’s load balancer. It starts listening on port 80 –

ports:
- containerPort: 80
  protocol: TCP

[SOURCE]

Since this service creates a new public IP, it is recommended not to replace or recreate it, as doing so would result in the creation of a new public IP. Other components can be updated individually.

Kubernetes Configurations for Elasticsearch

To maintain a persistent index, this deployment requires an external Elasticsearch cluster. loklak is able to connect to an external Elasticsearch cluster by changing a few configurations.

Docker Image and Environment Variables

The image used for Elasticsearch is taken from pires/docker-elasticsearch-kubernetes. It allows properties to be configured easily through environment variables. There is a long list of configurable variables, but we needed just a few of them for our task –

image: quay.io/pires/docker-elasticsearch-kubernetes:2.0.0
env:
- name: KUBERNETES_CA_CERTIFICATE_FILE
  value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
- name: NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: "CLUSTER_NAME"
  value: "loklakcluster"
- name: "DISCOVERY_SERVICE"
  value: "elasticsearch"
- name: NODE_MASTER
  value: "true"
- name: NODE_DATA
  value: "true"
- name: HTTP_ENABLE
  value: "true"

[SOURCE]

Persistent Index using Persistent Cloud Disk

To make the index last even after the deployment is stopped, we needed a stable place to store all that data. Here, Google Compute Engine's standard persistent disk was used. The disk can be created using the GCP web portal or the gcloud CLI.

Before attaching the disk, we need to declare a volume where we could mount it –

volumeMounts:
- mountPath: /data
  name: storage

[SOURCE]

Now that we have a volume, we can simply mount the persistent disk on it –

volumes:
- name: storage
  gcePersistentDisk:
    pdName: data-index-disk
    fsType: ext4

[SOURCE]

Now, whenever we deploy these configurations, we can reuse the previous index.

Exposing Elasticsearch to the Cluster

The HTTP and transport clients are enabled on ports 9200 and 9300 respectively. They can be exposed to the rest of the cluster using the following service –

apiVersion: v1
kind: Service
...
spec:
  ...
  ports:
  - name: http
    port: 9200
    protocol: TCP
  - name: transport
    port: 9300
    protocol: TCP

[SOURCE]

Once deployed, other deployments can access the cluster API from ports 9200 and 9300.

Connecting loklak to Elasticsearch

To connect loklak to the external Elasticsearch cluster, the TransportClient Java API is used. In order to enable this, we simply need to make some changes in the configuration.

Since we created the service named "elasticsearch" in the namespace "elasticsearch", we can access the cluster at elasticsearch.elasticsearch:9200 (HTTP) and elasticsearch.elasticsearch:9300 (transport).

To confine these changes to the Kubernetes deployment only, we can use the sed command while building the image (in the Dockerfile) –

sed -i.bak 's/^\(elasticsearch_transport.enabled\).*/\1=true/' conf/config.properties && \
sed -i.bak 's/^\(elasticsearch_transport.addresses\).*/\1=elasticsearch.elasticsearch:9300/' conf/config.properties && \

[SOURCE]

Now when we create the deployments in the Kubernetes cluster, loklak automatically connects to the external Elasticsearch cluster and creates indices if needed.

Verifying persistence of the Elasticsearch Index

In order to see that the data persists, we can completely delete the deployment or even the cluster if we want. Later, when we recreate the deployment, we can see all the messages already present in the index.

[2017-07-29 09:42:51,804][INFO ][node                     ] [Hellion] initializing ...
[2017-07-29 09:42:52,024][INFO ][plugins                  ] [Hellion] loaded [cloud-kubernetes], sites []
[2017-07-29 09:42:52,055][INFO ][env                      ] [Hellion] using [1] data paths, mounts [[/data (/dev/sdb)]], net usable_space [84.9gb], net total_space [97.9gb], spins? [possibly], types [ext4]
[2017-07-29 09:42:53,543][INFO ][node                     ] [Hellion] initialized
[2017-07-29 09:42:53,543][INFO ][node                     ] [Hellion] starting ...
[2017-07-29 09:42:53,620][INFO ][transport                ] [Hellion] publish_address {10.8.1.13:9300}, bound_addresses {10.8.1.13:9300}
[2017-07-29 09:42:53,633][INFO ][discovery                ] [Hellion] loklakcluster/cJtXERHETKutq7nujluJvA
[2017-07-29 09:42:57,866][INFO ][cluster.service          ] [Hellion] new_master {Hellion}{cJtXERHETKutq7nujluJvA}{10.8.1.13}{10.8.1.13:9300}{master=true}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2017-07-29 09:42:57,955][INFO ][http                     ] [Hellion] publish_address {10.8.1.13:9200}, bound_addresses {10.8.1.13:9200}
[2017-07-29 09:42:57,955][INFO ][node                     ] [Hellion] started
[2017-07-29 09:42:58,082][INFO ][gateway                  ] [Hellion] recovered [8] indices into cluster_state

In the last line from the logs, we can see that indices already present on the disk were recovered. Now if we head to the public IP assigned to the cluster, we can see that the message count is restored.

Conclusion

In this blog post, I discussed how we utilised the Kubernetes setup to shift loklak to Google Cloud Platform. The deployment is active and can be accessed from the link provided under the wiki section of the loklak/loklak_server repo.

I introduced these changes in pull request loklak/loklak_server#1349 with the help of @niranjan94, @uday96 and @chiragw15.


Caching Elasticsearch Aggregation Results in loklak Server

To provide aggregated data for various classifiers, loklak uses Elasticsearch aggregations. Aggregated data says a lot more than a few individual instances can. But performing aggregations on each request can be very resource-consuming, so we needed to come up with a way to reduce this load.

In this post, I will be discussing how I came up with a caching model for the aggregated data from the Elasticsearch index.

Fields to Consider while Caching

At the classifier endpoint, aggregations can be requested based on the following fields –

  • Classifier Name
  • Classifier Classes
  • Countries
  • Start Date
  • End Date

But to cache results, we can ignore the cases where we require just a few classes or countries and instead store aggregations for all of them. So the fields that define a cache entry will be –

  • Classifier Name
  • Start Date
  • End Date

Type of Cache

The data structure used for caching is Java's HashMap. It maps a string key to a wrapper object, both discussed in the following sections.

Key

The key is built using the fields mentioned previously –

private static String getKey(String index, String classifier, String sinceDate, String untilDate) {
    return index + "::::"
        + classifier + "::::"
        + (sinceDate == null ? "" : sinceDate) + "::::"
        + (untilDate == null ? "" : untilDate);
}

[SOURCE]

In this way, we can handle requests where a user asks for every class there is, without running the expensive aggregation job every time. This is because the key for such requests will be the same, as we are not considering country and class for this purpose.

Value

The object used as the value in the HashMap is a wrapper containing the following –

  1. json – It is a JSONObject containing the actual data.
  2. expiry – It is the expiry of the object in milliseconds.

class JSONObjectWrapper {
    private JSONObject json; 
    private long expiry;
    ... 
}

Timeout

The timeout associated with a cache entry is defined in the configuration file of the project as "classifierservlet.cache.timeout". It defaults to 5 minutes and is used to set the expiry of a cached JSONObject –

class JSONObjectWrapper {
    ...
    private static long timeout = DAO.getConfig("classifierservlet.cache.timeout", 300000);

    JSONObjectWrapper(JSONObject json) {
        this.json = json;
        this.expiry = System.currentTimeMillis() + timeout;
    }
    ...
}

 

Cache Hit

For searching in the cache, the previously mentioned string is composed from the parameters requested by the user. Checking for a cache hit can be done in the following manner –

String key = getKey(index, classifier, sinceDate, untilDate);
if (cacheMap.keySet().contains(key)) {
    JSONObjectWrapper jw = cacheMap.get(key);
    if (!jw.isExpired()) {
        // Do something with jw
    }
}
// Calculate the aggregations
...

But since jw here would contain all the data, we would need to filter out the classes and countries which are not needed.

Filtering results

For filtering out the parts which do not contain the information requested by the user, we can perform a simple pass and exclude the results that are not needed.

Since the number of fields to filter out, i.e. classes and countries, would not be that high, this process is not very resource-intensive. At the same time, it saves us from running heavy aggregation tasks for every user request.

Since the data about classes is nested inside the respective country field, we need to perform two levels of filtering –

JSONObject retJson = new JSONObject(true);
for (String key : json.keySet()) {
    JSONArray value = filterInnerClasses(json.getJSONArray(key), classes);
    if ("GLOBAL".equals(key) || countries.contains(key)) {
        retJson.put(key, value);
    }
}

Cache Miss

In the case of a cache miss, helper methods in ElasticsearchClient.java are called to get the results. These results are then converted from a HashMap to a JSONObject and stored in the cache for future use.

JSONObject freshCache = getFromElasticsearch(index, classifier, sinceDate, untilDate);
cacheMap.put(key, new JSONObjectWrapper(freshCache));

The getFromElasticsearch method finds all the possible classes and makes a request to the appropriate method in ElasticsearchClient, getting data for all classes and all countries.

Conclusion

In this blog post, I discussed the need for caching of aggregations and the way it is achieved in the loklak server. This feature was introduced in pull request loklak/loklak_server#1333 by @singhpratyush (me).


Supporting Dasherized Attributes and Query Params in flask-rest jsonapi for Open Event Server

In the Open Event API Server project, attributes of the API are dasherized.

What was the need for dasherizing the attributes in the API ?

All the attributes in our database models are separated by underscores, i.e. first name is stored as first_name. But most API client implementations support dasherized attributes by default. Attracting third-party client implementations in the future and making the API easy for them to set up was the primary reason behind this decision. Also, to quote the official JSON API spec recommendation for the same:

Member names SHOULD contain only the characters “a-z” (U+0061 to U+007A), “0-9” (U+0030 to U+0039), and the hyphen minus (U+002D HYPHEN-MINUS, “-“) as separator between multiple words.

Note: The dasherized version for first_name will be first-name.

flask-rest-jsonapi is the API framework used by the project. We were able to dasherize the API responses and requests by adding inflect=dasherize to each API schema, where dasherize is the following function:

def dasherize(text):
    return text.replace('_', '-')
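Since flask-rest-jsonapi schemas are based on marshmallow-jsonapi, attaching the function looks roughly like the sketch below; EventSchema and its fields are illustrative placeholders rather than the project's actual schema:

from marshmallow_jsonapi import fields
from marshmallow_jsonapi.flask import Schema

class EventSchema(Schema):
    class Meta:
        type_ = 'event'
        inflect = dasherize   # every attribute key is dasherized in requests and responses

    id = fields.Str(dump_only=True)
    starts_at = fields.DateTime()   # exposed to API clients as 'starts-at'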

 

flask-rest-jsonapi also provides powerful features through query params, such as sparse fieldsets, sorting and the inclusion of related objects (each discussed below).

But we observed that the query params were not being dasherized, which rendered these awesome features useless 🙁. The reason for this was that flask-rest-jsonapi took the query params as-is and searched for them in the API schema. As Python variable names cannot contain a dash, naming the attributes with a dash in the internal API schema was out of the question.

To add dasherizing support to the query params, changes in the QueryStringManager, located at querystring.py in the framework root, are required. A config variable named DASHERIZE_API was added to turn this feature on and off.

Following are the changes required for dasherizing query params:

For sparse fieldsets, in the fields function, replace the following line:

result[key] = [value]

with

if current_app.config['DASHERIZE_API'] is True:
    result[key] = [value.replace('-', '_')]
else:
    result[key] = [value]
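As a rough illustration of what this change does for a dasherized sparse fieldsets value (the attribute names here are hypothetical):

# illustrative only: a dasherized sparse fieldsets value, e.g. ?fields[event]=starts-at,ends-at
value = 'starts-at,ends-at'
result = {}
key = 'event'

# with DASHERIZE_API enabled, the stored fields match the underscored schema attributes
result[key] = [value.replace('-', '_')]   # ['starts_at,ends_at']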

 

For sorting, in the sorting function, replace the following line:

field = sort_field.replace('-', '')

with

if current_app.config['DASHERIZE_API'] is True:
    field = sort_field[0].replace('-', '') + sort_field[1:].replace('-', '_')
else:
    field = sort_field[0].replace('-', '') + sort_field[1:]
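The first character is treated separately because a leading '-' is the JSON API marker for descending order and should simply be stripped, while the remaining dashes belong to the attribute name and must become underscores. A small illustration with a hypothetical attribute:

sort_field = '-starts-at'   # i.e. ?sort=-starts-at, descending order

old = sort_field.replace('-', '')   # 'startsat'  -> no such attribute in the schema
new = sort_field[0].replace('-', '') + sort_field[1:].replace('-', '_')   # 'starts_at'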

 

For including related objects, in the include function, replace the following line:

return include_param.split(',') if include_param else []

with

if include_param:
    param_results = []
    for param in include_param.split(','):
        if current_app.config['DASHERIZE_API'] is True:
            param = param.replace('-', '_')
        param_results.append(param)
    return param_results
return []


Running Dredd Hooks as a Flask App in the Open Event Server

The Open Event Server has been based on the micro-framework Flask since its initial phases. After implementing the API documentation, we decided to implement Dredd testing in the Open Event API.

After isolating each request in Dredd testing, the real challenge is to bind the database engine to the Dredd hooks. Since we have been using the Flask-SQLAlchemy db.Model base class for building all the models, Flask, being a micro-framework itself, came to our rescue, as we could easily bind the database engine to a Flask app. Conventionally, Dredd hooks are written in pure Python, but we will be running them as a self-contained Flask app.

How do we initialise this Flask app in our Dredd hooks? The Flask app can easily be initialised in the before_all hook, as shown below:

def before_all(transaction):
    app = Flask(__name__)
    app.config.from_object('config.TestingConfig')

 

The database can be bound to the app as follows:

def before_all(transaction):
    app = Flask(__name__)
    app.config.from_object('config.TestingConfig')
    db.init_app(app)
    Migrate(app, db)

 

The challenge now is how to bind the application context when applying the database fixtures. In a normal Flask application this can be done as follows:

with app.app_context():
    # perform your operation

 

While for unit tests in Python:

with app.test_request_context():
    # perform tests

 

But as all the hooks are separate from each other, dredd-hooks-python supports the idea of a single stash where you can store all the desired variables (a dictionary is used here, but neither the data structure nor the name stash is mandatory).

The app and db can be added to stash as shown below:

stash = {}   # module-level store shared between hooks

@hooks.before_all
def before_all(transaction):
    app = Flask(__name__)
    app.config.from_object('config.TestingConfig')
    db.init_app(app)
    Migrate(app, db)
    stash['app'] = app
    stash['db'] = db

 

These variables stored in the stash can be used efficiently as below:

@hooks.before_each
def before_each(transaction):
    with stash['app'].app_context():
        db.engine.execute("drop schema if exists public cascade")
        db.engine.execute("create schema public")
        db.create_all()

 

and many other such examples.
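One such example is a hook targeted at a single documented transaction that seeds a fixture before it runs; the transaction name and the EventFactoryBasic factory below are placeholders to illustrate the pattern, not the project's exact fixtures:

@hooks.before("Events > Events Collection > List All Events")
def event_get_list(transaction):
    with stash['app'].app_context():
        event = EventFactoryBasic()   # hypothetical factory returning an Event model instance
        db.session.add(event)
        db.session.commit()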

Related Links:
1. Testing Your API Documentation With Dredd: https://matthewdaly.co.uk/blog/2016/08/08/testing-your-api-documentation-with-dredd/
2. Dredd Tutorial: https://help.apiary.io/api_101/dredd-tutorial/
3. Dredd Docs: http://dredd.readthedocs.io/


Displaying a Comments dialogfragment on a Button Click from the Feed Adapter in the Open Event Android App

While developing the live feed from the event's Facebook page for the Open Event Android App, there were questions about how best to display the comments in the feed. A dialog fragment over the feed on the click of a button was the most suitable solution. The problem was that a dialogfragment can only be shown from an app component (e.g. a fragment or an activity). Therefore, the challenge which remained was to call the dialogfragment from the adapter of the feed fragment, with the corresponding comments of the particular post, on a button click.

What is a dialogfragment?

A dialogfragment displays a dialog window, floating on top of its activity’s window. This fragment contains a Dialog object, which it displays as appropriate based on the fragment’s state. Control of the dialog (deciding when to show, hide, dismiss it) should be done through the API here, not with direct calls on the dialog (Developer.Android.com).

Solution

The solution which worked was to define an adapter callback interface with an onMethodCallback method in the feed adapter class itself, with the list of comment items fetched at runtime on the button click of a particular post. The interface had to be implemented by the main activity housing the feed fragment, which would then create the comments dialogfragment with the passed list of comments.

Implementation

Define an interface AdapterCallback with the method onMethodCallback, parameterized by the list of comment items, in your adapter class.

public interface AdapterCallback {
   void onMethodCallback(List<CommentItem> commentItems);
}

 

Create a constructor of the adapter with the adapterCallback as a parameter. Do not forget to surround it with a try/catch.

public FeedAdapter(Context context, AdapterCallback adapterCallback, List<FeedItem> feedItems) {
     this.mAdapterCallback = adapterCallback;
}

 

On the click of the comments button, call the onMethodCallback method with the corresponding comment items of the particular feed post.

getComments.setOnClickListener(v -> {
   if(commentItems.size()!=0)
       mAdapterCallback.onMethodCallback(commentItems);
});

 

Finally, implement the interface in the activity to display the comments dialog fragment populated with the corresponding comments of a feed post. Pass the comments as an ArrayList through the bundle.

@Override
public void onMethodCallback(List<CommentItem> commentItems) {
   CommentsDialogFragment newFragment = new CommentsDialogFragment();
   Bundle bundle = new Bundle();
   bundle.putParcelableArrayList(ConstantStrings.FACEBOOK_COMMENTS, new ArrayList<>(commentItems));
   newFragment.setArguments(bundle);
   newFragment.show(fragmentManager, "Comments");
}

 

Conclusion

The comments shown with each feed post in the Open Event Android app complement the feed well. Pagination is an option for both the comments and the feed, but that is something for another time. Until then, keep coding!


Create an App Widget for Bookmarked Sessions for the Open Event Android App

What is an app widget?

App Widgets are miniature application views that can be embedded in other applications (such as the Home screen) and receive periodic updates. These views are referred to as Widgets in the user interface, and you can publish one with an App Widget provider. – (Android Documentation).

An Android widget is an important piece of functionality that any app can take advantage of. It can be used to show important dates, things that the user personalizes in the app, etc. In the context of the Open Event Android App, it was necessary to create a bookmark widget so that the user could see their bookmarks on the home screen itself without opening the app. The widget already existed in the app, but it needed bug fixes and UI enhancements due to the migration to the Realm database. Therefore, the majority of my work circled around that.

Implementation

Declare the app widget in the manifest. Any update in the application that needs to be reflected in the widget is received by the class which extends AppWidgetProvider.

<receiver
   android:name=".widget.BookmarkWidgetProvider"
   android:enabled="true"
   android:label="Bookmarks">
   <intent-filter>
       <action android:name="android.appwidget.action.APPWIDGET_UPDATE" />
       <action android:name="${applicationId}.ACTION_DATA_UPDATED" />
       <action android:name="${applicationId}.UPDATE_MY_WIDGET" />
   </intent-filter>
   <meta-data
       android:name="android.appwidget.provider"
       android:resource="@xml/widget_info" />
</receiver>

 

Create a layout for the widget that is to be displayed on the home screen. Remember to use only the views supported by app widgets as defined in the documentation. After the creation of the layout, create a custom widget updater which will broadcast the data from the app to the receiver to update the widget.

public class WidgetUpdater {
   public  static  void updateWidget(Context context){
       int widgetIds[] = AppWidgetManager.getInstance(context.getApplicationContext()).getAppWidgetIds(new ComponentName(context.getApplicationContext(), BookmarkWidgetProvider.class));
       BookmarkWidgetProvider bookmarkWidgetProvider = new BookmarkWidgetProvider();
       bookmarkWidgetProvider.onUpdate(context.getApplicationContext(), AppWidgetManager.getInstance(context.getApplicationContext()),widgetIds);
       context.sendBroadcast(new Intent(BookmarkWidgetProvider.ACTION_UPDATE));
   }
}

 

Next, create a custom RemoteViewsService to update the views in the widget. This is required because the app widget does not operate in the usual lifecycle of the app, and therefore a remote service is needed which acts as the remote adapter connecting to the remote views. In your class, override the onGetViewFactory() method and create a new RemoteViewsFactory object to get the data from the app when the bookmark list is updated. To populate the remote views, override the getViewAt() method.

public class BookmarkWidgetRemoteViewsService extends RemoteViewsService {

    @Override
    public RemoteViewsFactory onGetViewFactory(Intent intent) {

        return new RemoteViewsFactory() {
            private MatrixCursor data = null;

            @Override
            public void onCreate() {
                // Called when your factory is first constructed.
            }

            @Override
            public void onDataSetChanged() {
                // Refresh the data backing the widget here.
            }

            @Override
            public RemoteViews getViewAt(int position) {
                // Build and return the RemoteViews for the item at this position.
            }

            // ... other RemoteViewsFactory methods
        };
    }
}

 

Finally, create a custom AppWidgetProvider which parses the relevant fields out of the intent and updates the UI. It acts like a broadcast receiver, hence all the updates sent by the WidgetUpdater are received here.

public class BookmarkWidgetProvider extends AppWidgetProvider {

   public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) {
       RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.bookmark_widget);
       setRemoteAdapter(context, views);
   }

   @Override
   public void onReceive(@NonNull Context context, @NonNull Intent intent) {
       super.onReceive(context, intent);
   }

   private void setRemoteAdapter(Context context, @NonNull final RemoteViews views) {
       views.setRemoteAdapter(R.id.widget_list,
               new Intent(context, BookmarkWidgetRemoteViewsService.class));
   }

}

 

Conclusion

For any event-based app, it is crucial to regularly provide updates to its users, and therefore the app widget forms an integral part of the whole experience.


Using Travis CI to Generate Sample Apks for Testing in Open Event Android

In the Open Event Android app we were already using Travis to push an apk of the Android app to the apk branch for easy testing after each commit in the repo. A better way to test the dynamic nature of the app would be to use the samples of different events from the Open Event repo to generate an apk for each sample. This could help us identify bugs and inconsistencies in the generator and the Android app easily. In this blog I will be talking about how this was implemented using Travis CI.

What is a CI?

Continuous Integration is a DevOps software development practice where developers regularly push their code changes into a central repository. After the merge, automated builds and tests are run on the pushed code. This helps developers identify bugs in the code quite easily. There are many CI services available, such as Travis CI, Jenkins, etc.

Now that we are all caught up, let's dive into the code.

Script for replacing a line in a file (configedit.sh)

The main role of this script is to replace a line in the config.json file. Why do we need this? It is used to reconfigure the Api_Link in config.json according to our build parameters. If we want the apk for Mozilla All Hands 2017 to be built, we would use this script to replace the Api_Link in config.json with the one for Mozilla All Hands 2017.

This is what the config.json file for the app looks like.

{
   "Email": "dev@fossasia.org",
   "App_Name": "Open Event",
   "Api_Link": "https://eventyay.com/api/v1/events/6/"
 }

We are going to replace line 4 of this file with

“Api_Link”:”https://raw.githubusercontent.com/fossasia/open-event/master/sample/MozillaAllHands17

VAR=0
STRING1=$1
while read line
do
    ((VAR+=1))
    if [ "$VAR" = 4 ]; then
        echo "$STRING1"
    else
        echo "$line"
    fi
done < app/src/main/assets/config.json

The script above reads the file line by line. When it reaches line 4, it prints out the string that was passed to the script as a parameter; otherwise it just prints out the line from the file.

When called, this script only prints the modified file to the terminal and does NOT create the required file. So we redirect its output back into config.json.

Now let’s move on to the main script which is responsible for the building of the apks.

Build Script(generate_apks.sh)

Main components of the script

  • Build the apk for the default sample i.e FOSSASIA17 using the build scripts ./gradlew build and ./gradlew assembleRelease.
  • Store all the Api_Links and apk names for which we need apks in different arrays.
  • Replace the Api_Link in the JSON file found at android/app/src/main/assets/config.json using configedit.sh.
  • Run the build scripts ./gradlew build and ./gradlew assembleRelease to generate the apk.
  • Move the generated apk from app/build/outputs/apk/ to a folder called daily where we store all the generated apks.
  • We then repeat this process for the other Api_Links in the array.

As of now we are generating the apks for the following events:

  1. FOSSASIA 17
  2. Mozilla All Hands 17
  3. Google I/O 17
  4. Facebook Developer Conference 17

Care is also taken to avoid all these builds if it is a PR. All the apks are generated only when there is a commit on the development branch i.e when the PR is merged.

Usage of Scripts in .travis.yml

To add Travis integration for the repo we need to include a file named .travis.yml in the repo which indicates Travis CI what to build.

language: android
 ….
 jdk: oraclejdk8
 ….
 before_script:
   - cd android
 script:
   - chmod +x generate_apks.sh
   - chmod +x configedit.sh
   - ./generate_apks.sh
 …..
 after_success:
   - bash <(curl -s https://codecov.io/bash)
   - cd ..
   - chmod +x upload-apk.sh
   - ./upload-apk.sh

In this file we need to define the language for which Travis will build. Here we indicate that it is android. We also specify the jdk version to be used.

Now let’s talk about the other parts of this snippet.

  • before_script : Executes the bash instructions before the Travis build starts. Here we do cd android so that we can access gradlew for building the apk.
  • script : This section consists of the instructions to be executed for the build. Here we give executable rights to the two scripts that we have written, generate_apks.sh and configedit.sh. Then ./generate_apks.sh is called and the project build starts. All the apks get saved to the folder daily.
  • after_success : This section consists of the instructions that are run after the script executes successfully. Here we run the script upload-apk.sh, which is responsible for pushing the generated apk files to an orphan branch called apk.

Some points of Interest

  • If the user/developer testing the apk is offline and then comes online, there will be database inconsistencies, as data from the local assets as well as the data from the Api_Link would appear in the app.
  • When the app generator CLI is ready, we can use it to trigger the builds instead of just replacing the Api_Link. This would also be effective in testing the app generator simultaneously.

Now we have everything set up to trigger builds for various samples after each commit.


Adding Event Overview Route in Open Event Frontend

In Open Event Frontend we have an event overview route, which is like a mini dashboard for an event where information regarding event sponsors, general info, roles, tickets, event setup, etc. is present. All of this information lives in corresponding components, and the dashboard is made up of those components. To create this dashboard, we will first create its components.

To create a component we will use the following Ember CLI command –

ember g component <component-name>

This command will give us three files: a template, a component and a test file corresponding to that component. We will use this command to generate all our components.

Now let's discuss each component separately and see how they are combined to form this route –

The event-setup-checklist component uses Semantic UI's steps to maintain a checklist of basic details, sponsors, sessions & microlocations, call for speakers, and session & speaker form customization, so that it becomes easy to identify which steps are complete and which are not.

Next is the general-info component, which shows basic information about an event, like start time, end time, location, number of speakers, number of sponsors, etc. It also shows whether the event is live or not.

In the manage-roles component, we manage the role of a given person, add people and assign roles to them, and edit roles for different people. We can also see who has been invited for a given role and who has accepted.

In the event-sponsors component we manage the sponsors for the event: edit an existing sponsor, add a new sponsor with their logo, name, type and level, or delete an existing sponsor.

Next is the ticket component, which displays the number of orders, the number of tickets sold, and the total sales. It also displays how many tickets of each type were sold.

Next is our app component, which offers two choices: the first is to generate the Android app for the event and the second is to generate the webapp for the event.

And finally, in our event.index template, we add these components using a ui stackable grid layout. Whenever we want to conditionally show or hide a component, we can do that in the event.index template, and hence it becomes very easy to manage a huge amount of content on a single page.


File Upload Validations on Open Event Frontend

In Open Event Frontend we have used Semantic UI's form validations to validate different fields of a form. There are certain instances in our app where the user has to upload a file, and it has to be validated against the suggested format before being uploaded to the server. Here we will discuss how to perform that validation.

Semantic UI lets us validate a field by passing an object along with rules for its validation. For fields like email and contact number we can pass the type as email and number respectively, but for validating a file we have to pass a regular expression with the allowed extensions. The following walks one through the process.

fields : {
  file: {
    identifier : 'file',
    rules      : [
      {
        type   : 'empty',
        prompt : this.l10n.t('Please upload a file')
      },
      {
        type   : 'regExp',
        value  : '/^(.*.((zip|xml|ical|ics|xcal)$))?[^.]*$/i',
        prompt : this.l10n.t('Please upload a file in suggested format')
      }
    ]
  }
}

Here we have passed the file element (which is to be validated) inside our fields object. Its identifier, which for this field is 'file', can match the id, name or data-validate property of the field element. After that we have passed an array of rules against which the field element is validated. The first rule shows an error message, given in the prompt field, in case the field is empty.

The next rule checks the allowed file extensions. The type of this rule is regExp, as we are passing a regular expression, which is as follows –

/^(.*.((zip|xml|ical|ics|xcal)$))?[^.]*$/i

It is a little complex to explain from the beginning, so let us break it down from the end –

 

  • $ : matches the end of the string
  • [^.]* : a negated set; matches any character that is not a dot, where * means zero or more of the preceding token
  • ( … )? : the trailing ? makes the preceding capturing group optional
  • .*.((zip|xml|ical|ics|xcal)$) : the first capturing group; its tokens are combined with the inner capture group (zip|xml|ical|ics|xcal) to extract a substring ending in one of the allowed extensions
  • ^ : matches the beginning of the string

The above regular expression filters out all files except those with zip/xml/ical/ics/xcal extensions, which are the allowed formats for the event source file.
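As a quick sanity check of the pattern, here is the same expression exercised with Python's re module (purely for illustration; in the app the check runs in the browser through Semantic UI):

import re

upload_re = re.compile(r'^(.*.((zip|xml|ical|ics|xcal)$))?[^.]*$', re.IGNORECASE)

print(bool(upload_re.match('schedule.ical')))   # True  - allowed extension
print(bool(upload_re.match('event.XML')))       # True  - the i flag makes it case-insensitive
print(bool(upload_re.match('slides.pdf')))      # False - extension not in the allowed list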

References

  • Ivaylo Gerchev blog on form validation in semantic ui
  • Drmsite blog on semantic ui form validation
  • Semantic ui form validation docs
  • Stackoverflow regex for file extension