Receiving Data From the Network Asynchronously

We often need to receive information from the network in Android apps. Although receiving data is conceptually simple, a problem arises when it is done on the main user-interaction thread. To understand the problem, consider an app that shows some text downloaded from an online database. When the user clicks a button to display the text, the HTTP response may take some time to arrive. So what does the activity do in that time? Run on the main thread, the request creates a lag in the user interface and makes the app stop responding. Recently I implemented this in the Connfa App to download data in the Open Event Format.

To solve this problem, we use a concurrent thread to download the data and send the result back for processing, keeping the work off the main user-interaction thread. AsyncTask allows you to perform background operations and publish results on the UI thread without having to manipulate threads and/or handlers.

AsyncTask is designed to be a helper class around Thread and Handler and does not constitute a generic threading framework. AsyncTasks should ideally be used for short operations (a few seconds at the most). If you need to keep threads running for long periods of time, it is highly recommended you use the various APIs provided by the java.util.concurrent package such as Executor, ThreadPoolExecutor and FutureTask.
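As an illustration of that recommendation (a minimal sketch, not from the original post; fetchJson() and showResult() are hypothetical placeholders), the same hand-off between a worker thread and the UI thread can be done with an ExecutorService and a Handler:

import android.os.Handler;
import android.os.Looper;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LongRunningDownload {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private final Handler mainHandler = new Handler(Looper.getMainLooper());

    // fetchJson() and showResult() are placeholders used only to
    // illustrate the hand-off between threads.
    public void start() {
        executor.execute(new Runnable() {
            @Override
            public void run() {
                final String json = fetchJson();      // blocking call, worker thread
                mainHandler.post(new Runnable() {
                    @Override
                    public void run() {
                        showResult(json);             // back on the UI thread
                    }
                });
            }
        });
    }

    private String fetchJson() { return null; }       // placeholder
    private void showResult(String json) { }          // placeholder
}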

An asynchronous task is defined by a computation that runs on a background thread and whose result is published on the UI thread. An asynchronous task is defined by 3 generic types, called Params, Progress and Result, and 4 steps, called onPreExecute, doInBackground, onProgressUpdate and onPostExecute.

In this blog, I describe how to download data in an Android app with AsyncTask, using the OkHttp library.

First, add the dependency for the OkHttp library:

compile 'com.squareup.okhttp:okhttp:2.4.0' 


Extend the AsyncTask class and override its callback methods to suit your needs. Here I declare a class named AsyncDownloader extending AsyncTask&lt;String, Integer, String&gt;: the String parameter is our URL, Integer is the progress type, and the String result is the JSON data we return. Instead of using AsyncTask's blocking get() method, we implement an interface to deliver the data, so the main user-interaction thread does not have to wait for the result.

import android.os.AsyncTask;

import com.squareup.okhttp.Call;
import com.squareup.okhttp.OkHttpClient;
import com.squareup.okhttp.Request;
import com.squareup.okhttp.Response;

import java.io.IOException;

public class AsyncDownloader extends AsyncTask<String, Integer, String> {
    private String jsonData = null;

    // Callback interface so the caller receives the JSON without blocking
    public interface JsonDataSetter {
        void setJsonData(String str);
    }

    JsonDataSetter jsonDataSetterListener;

    public AsyncDownloader(JsonDataSetter listener) {
        this.jsonDataSetterListener = listener;
    }

    @Override
    protected void onPreExecute() {
        super.onPreExecute();
    }

    @Override
    protected String doInBackground(String... params) {

        String url = params[0];

        OkHttpClient client = new OkHttpClient();
        Request request = new Request.Builder()
                .url(url)
                .build();
        Call call = client.newCall(request);

        Response response = null;

        try {
            // A synchronous call is fine here: we are already on a worker thread
            response = call.execute();

            if (response.isSuccessful()) {
                jsonData = response.body().string();

            } else {
                jsonData = null;
            }

        } catch (IOException e) {
            e.printStackTrace();
        }


        return jsonData;
    }

    @Override
    protected void onPostExecute(String jsonData) {
        super.onPostExecute(jsonData);
        jsonDataSetterListener.setJsonData(jsonData);
    }
}


We download the data in doInBackground() by making a request with the OkHttp library, and send the result back in onPostExecute() through our interface method. See the complete class implementation above.

You can create an instance of AsyncDownloader in an activity and implement the interface to receive the data.
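A minimal sketch of such an activity (the activity name, layout, and URL are placeholders, not from the original post):

import android.app.Activity;
import android.os.Bundle;

public class MainActivity extends Activity
        implements AsyncDownloader.JsonDataSetter {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Pass the activity itself as the listener and start the download;
        // the URL here is a placeholder
        AsyncDownloader downloader = new AsyncDownloader(this);
        downloader.execute("https://example.com/api/event.json");
    }

    @Override
    public void setJsonData(String jsonData) {
        // Called on the UI thread from onPostExecute();
        // jsonData is null if the request failed
        if (jsonData != null) {
            // parse and display the JSON here
        }
    }
}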

Tip: Always cancel an AsyncTask when you are done with it (for example, in onDestroy()) to avoid piling up concurrent tasks.

Find the complete code here.


Storing SUSI Settings for Anonymous Users

SUSI Web Chat offers a number of settings by which it can be configured. A logged-in user can change these settings and save them by sending them to the server, but an anonymous user would lose them every time the chat is opened, unless the settings are stored somewhere. To overcome this problem, we store the settings in the browser's cookies, following the steps below.

The user settings currently available are:

  1. Theme – change the theme to either ‘dark’ or ‘light’.
  2. Enter As Send – send a message on pressing Enter.
  3. Mic Input – enable the mic for voice input.
  4. Speech Output – enable speech output only for speech input.
  5. Speech Output Always – keep speech output always on.
  6. Language – select the default language.
  7. Speech Rate – adjust the speech output rate.
  8. Speech Pitch – adjust the speech output pitch.

The first step is to define a default state for all the settings, which acts as the default for every user opening the chat for the first time. We get these defaults from the user preferences store in the file UserPreferencesStore.js:

let _defaults = {
    Theme: 'light',
    Server: 'http://api.susi.ai',
    StandardServer: 'http://api.susi.ai',
    EnterAsSend: true,
    MicInput: true,
    SpeechOutput: true,
    SpeechOutputAlways: false,
    SpeechRate: 1,
    SpeechPitch: 1,
};

We assign these default settings as the initial constructor state of our component so that they are populated beforehand.

2. On changing the value of any setting, we update the state through various handler functions, which are as follows:

  • handleSelectChange – handles all the theme changes
  • handleEnterAsSend – handles the send-on-Enter option
  • handleMicInput – handles enabling mic input
  • handleSpeechOutput – handles whether speech output should be enabled
  • handleSpeechOutputAlways – handles whether the user wants to hear speech after every input
  • handleLanguage – handles the language settings related to speech
  • handleTextToSpeech – handles the text-to-speech settings, including speech rate and pitch

  3. Once the values are changed, we use the universal-cookie package to save the settings in the user's browser. This is done in the handleSubmit function in the Settings.react.js file:

import Cookies from 'universal-cookie';

const cookies = new Cookies(); // Create cookie object

// set all values
let vals = {
    theme: newTheme,
    server: newDefaultServer,
    enterAsSend: newEnterAsSend,
    micInput: newMicInput,
    speechOutput: newSpeechOutput,
    speechOutputAlways: newSpeechOutputAlways,
    rate: newSpeechRate,
    pitch: newSpeechPitch,
}

let settings = Object.assign({}, vals);
settings.LocalStorage = true;
// Store in cookies for anonymous users
cookies.set('settings', settings);
  4. Once the values are set in the browser, we load the saved settings every time the user opens SUSI Web Chat, using the following checks in the file API.actions.js. If the user is not logged in, we read the values from the cookie with its get function and initialise the settings for that user:
if (cookies.get('loggedIn') === null ||
    cookies.get('loggedIn') === undefined) {
    // Not logged in: get the value from the settings cookie
    let settings = cookies.get('settings');
    if (settings !== undefined) {
        // Check if the settings are set in the cookie
        SettingsActions.initialiseSettings(settings);
    }
} else {
    // Call the AJAX for getting settings for logged-in users
}


Advantage of Open Event Format over xCal, pentabarf/frab XML and iCal

Event apps like Giggity and Giraffe use event formats like xCal, pentabarf/frab XML, and iCal. In this blog, I present some of the advantages of using FOSSASIA's Open Event data format over the other formats. I added support for the Open Event format in these two apps, so I describe the advantages and improvements with respect to them.

  • The main problem faced in the Giggity app is that data like social links, microlocations, and the link to the logo file cannot be fetched from a single file, so a separate raw file has to be added to provide this data. The Open Event format provides all of this information from a single URL received from the server, so there is no need for a separate file.
  • Here is the pentabarf-format data for the FOSSASIA 2016 conference, excluding sessions. Although it provides the basic information, it leaves out the logo URL, the longitude and latitude of microlocations (rooms), and links to social media and the website, while the Open Event format provides all these missing details along with extra information like language, comments, etc. See the FOSSASIA 2016 Open Event format sample.
<conference>
  <title>FOSSASIA 2016</title>
  <subtitle/>
  <venue>Science Centre Road</venue>
  <city>Singapore</city>
  <start>2016-03-18</start>
  <end>2016-03-20</end>
  <days>3</days>
  <day_change>09:00:00</day_change>
  <timeslot_duration>00:00:00</timeslot_duration>
</conference>
  • Parsing the received file gets very complicated in the case of iCal, xCal, etc., as tags need to be matched to extract the data, whereas various libraries are available for parsing JSON data. So we can simply create an array list from the received data and send it to the adapter (see the parsing sketch below). See this example for working code, and compare it with the parser for iCal to see the difference in complexity.
  • Another common problem is the structure of the received formats: it can become complicated to define the sub-parts of a single element. For example, for a location the Open Event format defines latitude and longitude separately, while in the iCal format they are just separated by a comma:

iCal

GEO:1.333194;103.736132

JSON

{
    "id": 1,
    "name": "Stage 1",
    "floor": 0,
    "latitude": 37.425420,
    "longitude": -122.080291,
    "room": null
}

And the information provided is more detailed.
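As a rough illustration of how straightforward the JSON side is (a sketch using Android's built-in org.json classes; the class name is hypothetical, and the field names follow the microlocation sample above):

import org.json.JSONException;
import org.json.JSONObject;

public class MicrolocationParser {

    public static class Microlocation {
        public final int id;
        public final String name;
        public final double latitude;
        public final double longitude;

        public Microlocation(int id, String name, double latitude, double longitude) {
            this.id = id;
            this.name = name;
            this.latitude = latitude;
            this.longitude = longitude;
        }
    }

    public static Microlocation parse(String json) throws JSONException {
        JSONObject obj = new JSONObject(json);
        // Latitude and longitude arrive as separate numeric fields,
        // so no string splitting (as with iCal's GEO line) is needed
        return new Microlocation(
                obj.getInt("id"),
                obj.getString("name"),
                obj.getDouble("latitude"),
                obj.getDouble("longitude"));
    }
}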

  • The Open Event format is well documented, which makes it easier for other developers to work with it. Find the documentation here.


Scheduling Jobs to Check Expired Access Token in Yaydoc

In Yaydoc, we use the user's access token for various tasks like pushing documentation, registering webhooks, and checking their status. The access token is very important to us, so I decided to add a cron job that checks whether a user's token has expired. One problem, though: with a larger number of users, the cron job would send thousands of requests at a time and could break the app. So I decided to queue the process, using the `async` library for queuing the jobs.

const github = require("./github");
const queue = require("./queue");

const User = require("../model/user");

exports.checkExpiredToken = function () {
  User.count(function (error, count) {
    if (error) {
      console.log(error);
    } else {
      // Number of 10-user pages needed to cover all users
      // (assuming paginateUsers expects zero-based page numbers)
      var pages = Math.ceil(count / 10);
      for (var i = 0; i < pages; i++) {
        User.paginateUsers(i, 10,
        function (error, users) {
          if (error) {
            console.log(error);
          } else {
            users.forEach(function(user) {
              queue.addTokenRevokedJob(user);
            })
          }
        })
      }
    }
  })
}

In the above code I’m paginating the list of users in the database and then I’m adding each user to the queue.

var tokenRevokedQueue = async.queue(function (user, done) {
  github.retriveUser(user.token, function (error, userData) {
    if (error) {
      if (user.expired === false) {
        mailer.sendMailOnTokenFailure(user.email);
        User.updateUserById(user.id, {
          expired: true
        }, function(error, data) {
          if (error) {
            console.log(error);
          }
        });
      }
      done();
    } else {
      done();
    }
  })
}, 2);

I made this queue with async's queue method. The first parameter is the worker logic, and the second is how many jobs can be executed concurrently. I check whether the user has revoked the token by sending a request to GitHub's user API: a '200' response means the token is valid, otherwise it is invalid. If the token is invalid, I send the user an email saying "Your access token is revoked, so sign in once again to continue the service".


Making a Sticky Top Navigation bar for Susper using Angular

A lot of websites require a top navigation bar that sticks to the top, irrespective of screen size. This blog deals with how the top navigation bar was made sticky in Susper.

  1. Using the correct Bootstrap classes. Notice the code enveloping the navigation bar.

<nav class="top-nav navbar navbar-static-top navbar-default">
  <div class="container-fluid">
    <div class="navbar-header" id="navcontainer">
    </div>
  </div>
</nav>

Points to note:

  • Using navbar and navbar-default creates a standard gray navigation bar.
  • Using navbar-static-top makes the navbar stick only to the top of the page and disappear on scrolling down.
  • Using container-fluid creates a container for the contents of the navbar with wide margins.
  2. Now we also need to write some personalized CSS. Notice the class top-nav and the id navcontainer. This is the CSS code for them:

.top-nav {
  margin-bottom: 0;
}

#navcontainer {
  height: 65px;
  width: 100vw;
}

#navcontainer ul {
  margin: 0;
  padding: 0;
  list-style-type: none;
}

Points to note:

  • Margin and padding can be set according to how the navbar should look. Click here to know the difference between margins and padding.
  • The height has been customized to 65px in Susper, with a width of 100vw (the entire viewport width).
  3. Lastly, if your navigation bar is inside the body tag, remember that by default the body has a top margin of 57px. As a result, you may see extra white space on top of your navigation bar. To remove this:
  • Move the navigation bar code out of the body tag. If you can't, then:
  • Place your navigation bar in a container (resultContainer on the Susper results page) and write this in your CSS file:

.resultContainer {
  margin-top: -57px;
}


Persistently Storing loklak Server Dumps on Kubernetes

In an earlier blog post, I discussed loklak setup on Kubernetes. The deployment mentioned in the post was to test the development branch. Next, we needed to have a deployment where all the messages are collected and dumped in text files that can be reused.

In this blog post, I will be discussing the challenges with such deployment and the approach to tackle them.

Volatile Disk in Kubernetes

The pods that hold deployments in Kubernetes have disk storage. Any data written by the application persists only while the same version of the deployment is running. As soon as the deployment is updated or relocated, the data stored during the application's run is cleaned up.


Due to this, dumps are written while loklak is running, but they get wiped out when the deployment image is updated. In other words, all dumps are lost when the image updates. We needed to find a solution to this, as we needed permanent storage for collecting dumps.

Persistent Disk

In order to have storage which can hold data permanently, we can mount persistent disk(s) on a pod at the appropriate location. This ensures that the data that is important to us stays with us, even when the deployment goes down.


In order to add persistent disks, we first need to create a persistent disk. On Google Cloud Platform, we can use the gcloud CLI to create disks in a given region –

gcloud compute disks create --size=<required size> --zone=<same as cluster zone> <unique disk name>

After this, we can mount it on a Docker volume defined in Kubernetes configurations –

      ...
      volumeMounts:
        - mountPath: /path/to/mount
          name: volume-name
  volumes:
    - name: volume-name
      gcePersistentDisk:
        pdName: disk-name
        fsType: fileSystemType

But this setup can’t be used for storing loklak dumps. Let’s see “why” in the next section.

Rolling Updates and Persistent Disk

The Kubernetes deployment needs to be updated when the master branch of loklak server is updated. This update of master deployment would create a new pod and try to start loklak server on it. During all this, the older deployment would also be running and serving the requests.


The control will not be transferred to the newer pod until it is ready and all the probes are passing. The newer deployment will now try to mount the disk which is mentioned in the configuration, but it would fail to do so. This would happen because the older pod has already mounted the disk.


Therefore, all new deployments would simply fail to start due to insufficient resources. To overcome such issues, Kubernetes allows persistent volume claims. Let’s see how we used them for loklak deployment.

Persistent Volume Claims

Kubernetes provides Persistent Volume Claims which claim resources (storage) from a Persistent Volume (just like a pod does from a node). The higher level APIs are provided by Kubernetes (configurations and kubectl command line). In the loklak deployment, the persistent volume is a Google Compute Engine disk –

apiVersion: v1
kind: PersistentVolume
metadata:
  name: dump
  namespace: web
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  gcePersistentDisk:
    pdName: "data-dump-disk"
    fsType: "ext4"

[SOURCE]

It must be noted here that a persistent disk by the name of data-dump-disk has already been created in the same region.


The storage class defines the way in which the PV should be handled, along with the provisioner for the service –

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
  namespace: web
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-central1-a

[SOURCE]

After having the StorageClass and PersistentVolume, we can create a claim for the volume by using appropriate configurations –

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dump
  namespace: web
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: slow

[SOURCE]

After this, we can mount this claim on our Deployment –

  ...
  volumeMounts:
    - name: dump
      mountPath: /loklak_server/data
volumes:
  - name: dump
    persistentVolumeClaim:
      claimName: dump

[SOURCE]

Verifying persistence of Dumps

To verify this, we can redeploy the cluster using the same persistent disk and check if the earlier dumps are still present there –

$ http http://link.to.deployment/dump/
HTTP/1.1 200 OK
Cache-Control: public, max-age=60
Content-Type: text/html;charset=utf-8
...


<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">

<h1>Index of /dump</h1>
<pre>      Name 
[gz ] <a href="messages_20170802_71562040.txt.gz">messages_20170802_71562040.txt.gz</a>   Thu Aug 03 00:07:21 GMT 2017   132M
[gz ] <a href="messages_20170803_69925009.txt.gz">messages_20170803_69925009.txt.gz</a>   Mon Aug 07 15:40:04 GMT 2017   532M
[gz ] <a href="messages_20170807_36357603.txt.gz">messages_20170807_36357603.txt.gz</a>   Wed Aug 09 10:26:24 GMT 2017   377M
[txt] <a href="messages_20170809_27974404.txt">messages_20170809_27974404.txt</a>      Thu Aug 10 08:51:49 GMT 2017  1564M
<hr></pre>
...

Conclusion

In this blog post, I discussed the process of deployment of loklak with persistent dumps on Kubernetes. This deployment is intended to work as root.loklak.org in near future. The changes were proposed in loklak/loklak_server#1377 by @singhpratyush (me).


Export an Event using APIs of Open Event Server

In FOSSASIA's Open Event Server project, we allow the organizer, co-organizer, and admins to export all the data related to an event as an archive of JSON files. This way the data can be reused elsewhere for various purposes. The basic workflow is something like this:

  • Send a POST request to /events/{event_id}/export/json with a payload specifying whether you require the various media files.
  • The POST request starts a celery task in the background that extracts the data related to the event and turns it into JSON.
  • The celery task URL is returned as the response. Sending a GET request to this URL gives the status of the task. If the status is FAILED or SUCCESS, there is a corresponding error message or result.
  • Separate JSON files are created for the event, speakers, sessions, microlocations, tracks, session types, and custom forms.
  • All these files are then archived, and the zip is served at the endpoint /events/{event_id}/exports/{path}.
  • Sending a GET request to that endpoint downloads a zip containing all the data related to the event.

Let's dive into each of these points one by one.

POST request ( /events/{event_id}/export/json)

For making a POST request, you first need JWT authentication, as with most of the other API endpoints. You need to send a payload containing settings for whether you want the media files related to the event to be downloaded along with the JSON files. An example payload looks like this:

{
    "image": true,
    "video": true,
    "document": true,
    "audio": true
}

def export_event(event_id):
    from helpers.tasks import export_event_task

    settings = EXPORT_SETTING
    settings['image'] = request.json.get('image', False)
    settings['video'] = request.json.get('video', False)
    settings['document'] = request.json.get('document', False)
    settings['audio'] = request.json.get('audio', False)
    # queue task
    task = export_event_task.delay(
        current_identity.email, event_id, settings)
    # create Job
    create_export_job(task.id, event_id)

    # in case of testing
    if current_app.config.get('CELERY_ALWAYS_EAGER'):
        # send_export_mail(event_id, task.get())
        TASK_RESULTS[task.id] = {
            'result': task.get(),
            'state': task.state
        }
    return jsonify(
        task_url=url_for('tasks.celery_task', task_id=task.id)
    )


Taking the settings for the media files and the event id, we pass them as parameters to the export-event celery task and queue it up. We then create an entry in the database with the task id, the event id, and the user who triggered the export, to keep a record of the activity. After that, we return the URL for the celery task to the user as the response.

If the celery task is still underway, the response shows 'state': 'WAITING'. Once the task is completed, the value of 'state' is either 'FAILED' or 'SUCCESS'. If it is SUCCESS, the response includes the result of the task, in this case the download URL for the zip.

Celery Task to Export Event

Exporting an event is a very time-consuming process, and we don't want it to get in the way of the user's interaction with other services. So we needed a queueing system that executes tasks in the background without keeping the main worker from serving other user requests. We used celery to queue tasks in the background and execute them without disturbing other user requests.

We have created a celery task named "export.event", which calls event_export_task_base(), which in turn calls export_event_json(), where all the JSON generation is carried out. To start the celery task, all we do is call export_event_task.delay(email, event_id, settings); it returns a celery task object whose task id can be used to check the status of the task.

@celery.task(base=RequestContextTask, name='export.event', bind=True)
def export_event_task(self, email, event_id, settings):
    event = safe_query(db, Event, 'id', event_id, 'event_id')
    try:
        logging.info('Exporting started')
        path = event_export_task_base(event_id, settings)
        # task_id = self.request.id.__str__()  # str(async result)
        download_url = path

        result = {
            'download_url': download_url
        }
        logging.info('Exporting done.. sending email')
        send_export_mail(email=email, event_name=event.name, download_url=download_url)
    except Exception as e:
        print(traceback.format_exc())
        result = {'__error': True, 'result': str(e)}
        logging.info('Error in exporting.. sending email')
        send_export_mail(email=email, event_name=event.name, error_text=str(e))

    return result


After exporting, the path to the export zip is returned. We then get the download endpoint and return it as the result of the celery task. In case there is an error in the celery task, we print the entire traceback in the celery worker and return the error as the result.

Make the Exported Zip Ready

We have a separate export_helpers.py file in the helpers module of the API for performing the various tasks related to exporting all the data of an event. The most important function in this file is export_event_json(), which accepts the event_id and the settings dictionary. In the export helpers we have global constant dictionaries that define the order in which fields appear in the JSON files created while exporting.

Firstly, we create the directory for storing the exported JSON and, finally, the archive of all the JSON files. We have a global dictionary named EXPORTS which maps each table to the corresponding Model that we want to extract from the database and store as JSON. From the EXPORTS dict we get the Model names, use these Models to query the database with the given event_id, and retrieve the data. After retrieving the data, we use another helper function named _order_json, which converts the SQLAlchemy data to JSON in the order mentioned in the dictionary. After this we download the media data, i.e. the slides, images, videos, etc. related to that particular Model, depending on the settings.

def export_event_json(event_id, settings):
    """
    Exports the event as a zip on the server and return its path
    """
    # make directory
    exports_dir = app.config['BASE_DIR'] + '/static/uploads/exports/'
    if not os.path.isdir(exports_dir):
        os.mkdir(exports_dir)
    dir_path = exports_dir + 'event%d' % int(event_id)
    if os.path.isdir(dir_path):
        shutil.rmtree(dir_path, ignore_errors=True)
    os.mkdir(dir_path)
    # save to directory
    for e in EXPORTS:
        if e[0] == 'event':
            query_obj = db.session.query(e[1]).filter(
                e[1].id == event_id).first()
            data = _order_json(dict(query_obj.__dict__), e)
            _download_media(data, 'event', dir_path, settings)
        else:
            query_objs = db.session.query(e[1]).filter(
                e[1].event_id == event_id).all()
            data = [_order_json(dict(query_obj.__dict__), e) for query_obj in query_objs]
            for count in range(len(data)):
                data[count] = _order_json(data[count], e)
                _download_media(data[count], e[0], dir_path, settings)
        data_str = json.dumps(data, indent=4, ensure_ascii=False).encode('utf-8')
        fp = open(dir_path + '/' + e[0], 'w')
        fp.write(data_str)
        fp.close()
    # add meta
    data_str = json.dumps(
        _generate_meta(), sort_keys=True,
        indent=4, ensure_ascii=False
    ).encode('utf-8')
    fp = open(dir_path + '/meta', 'w')
    fp.write(data_str)
    fp.close()
    # make zip
    shutil.make_archive(dir_path, 'zip', dir_path)
    dir_path = dir_path + ".zip"

    storage_path = UPLOAD_PATHS['exports']['zip'].format(
        event_id=event_id
    )
    uploaded_file = UploadedFile(dir_path, dir_path.rsplit('/', 1)[1])
    storage_url = upload(uploaded_file, storage_path)

    return storage_url


After we receive the json data from the _order_json() function, we create a dump of the json using json.dumps with an indentation of 4 spaces and utf-8 encoding. Then we save this dump in a file named according to the model from which the data was retrieved. This process is repeated for all the models that are mentioned in the EXPORTS dictionary. After all the JSON files are created and all the media is downloaded, we make a zip of the folder.

To do this we use shutil.make_archive, which creates the zip. The zip is then uploaded to the storage service used by the server, such as S3 or Google Storage, and the URL through which it can be accessed is returned.

Apart from this function, the other major function in this file creates an export-job entry in the database so that we can keep track of which user started a task for which event, which helps with debugging and security.

Downloading the Zip File

After the exporting is completed, if you send a GET request to the task url, you get a response similar to this:

{
    "result": {
        "download_url": "http://localhost:5000/static/media/exports/1/zip/OGpMM0w2RH/event1.zip"
    },
    "state": "SUCCESS"
}

So on opening the download url in the browser or using any other tool, you can download the zip file.

One big question remains: after sending the POST request, how do you know that the task is completed and the export is ready to be downloaded? One way of solving this is a technique known as polling, where we send a GET request repeatedly at a fixed interval. From the POST request we get the URL for the export task, and we keep polling this task URL until the state is either "FAILED" or "SUCCESS". If it is SUCCESS, you place the download URL somewhere on your website, where it can be clicked to download the archived export of the event.
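As an illustration (not from the original post), a client such as an Android app could poll the task URL like this in Java with OkHttp; the crude string check on the state and the class name are placeholders for real JSON parsing:

import java.io.IOException;

import com.squareup.okhttp.OkHttpClient;
import com.squareup.okhttp.Request;
import com.squareup.okhttp.Response;

public class ExportPoller {
    private final OkHttpClient client = new OkHttpClient();

    // taskUrl is the URL returned by POST /events/{event_id}/export/json
    public String pollExportTask(String taskUrl)
            throws IOException, InterruptedException {
        while (true) {
            Request request = new Request.Builder().url(taskUrl).build();
            Response response = client.newCall(request).execute();
            String body = response.body().string();
            // A real client would parse the JSON and read the "state" field
            if (body.contains("\"SUCCESS\"")) {
                return body;                      // contains the download_url
            }
            if (body.contains("\"FAILED\"")) {
                throw new IOException("Export failed: " + body);
            }
            Thread.sleep(5000);                   // fixed polling interval
        }
    }
}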

 


Making Customized and Mobile Responsive Drop-down Menus in Susper using Angular

In Susper, the drop-down menu is customized with colorful search icons, and we wanted to keep the same menu on mobile screens too; however, the drop-down menu disappeared on all screens narrower than 767px. This blog shows how to create CSS classes for such drop-down menus without relying on Bootstrap.
This is how the issue was solved.

  1. Replacing standard Bootstrap classes: the drop-down menu blocks had the following source code:

<div class="dropdown-menu">
  <div class="row">
    <div class="col-sm-4">
      <div class="block">
      </div>
    </div>
  </div>
</div>

Using col-sm-4 will do the following:

  • For widths greater than 767px: Divide each row into four equally sized columns.
  • For widths smaller than 767px: Stack all the columns on top of each other.

Since the drop-down menu’s design was to remain intact, I made the following changes:

  • Replace row with menu-row
  • Replace col-sm-4 with menu-item

Then I wrote personalized CSS for these classes:

.menu-row {
  width: 267px;
  grid-template-columns: 1fr 1fr 1fr;
  background-color: white;
}

.menu-item {
  display: inline-block;
  width: 86px;
}
  • Width: sets the width of the div; each row now has a width of 267px, with each column in it having a width of 86px.
  • Grid-template-columns: lays out the structure of the template; here 1fr 1fr 1fr means there will be three equal-width columns in a row.
  • Display: inline-block overrides the div element's default behaviour of starting on a new line.
  2. Custom CSS for small screens: in standard Bootstrap, for screen sizes less than 767px, the dropdown class has properties like a transparent background, no border, etc. that need to be overwritten. So we add a new id to the div tag as shown:

<div id="small-drop" class="dropdown-menu">

/* Now we add CSS for it, as shown: */
@media screen and (max-width: 767px) {
  #small-drop {
    position: absolute;
    background-color: white;
    border: 1px solid #cccccc;
    right: -38px;
    left: auto;
  }
}

  • Position: absolute ensures all our values are absolute, not relative to a div higher in the hierarchy.
  • Border: the values for the border represent, respectively: thickness, style, and color.
  • Auto: the value auto for left signifies that there is no fixed value for the left offset; it takes the default value.

References:

  1. For working of grids in Bootstrap: https://www.w3schools.com/bootstrap/bootstrap_grid_examples.asp
  2. A useful article for difference between id and class: https://css-tricks.com/the-difference-between-id-and-class

 


Basics behind BJT and FET experiments in PSLab

A high school student will come across certain electronics and electrical experiments in their curriculum. Some of them relate to semiconductor devices such as Bipolar Junction Transistors (BJTs) and Field Effect Transistors (FETs). The PSLab device can function as a waveform generator, voltage and current source, oscilloscope, and multimeter, and using these functionalities one can design an experiment. This blog post covers the basics one should know about these experiments and the PSLab device in order to program an experiment in the saved-experiments section.

Channels and Sources in the PSLab Device

The PSLab device has three pins dedicated to functioning as programmable voltage sources (PVS) and one pin that functions as a programmable current source (PCS).

Programmable Voltage Sources can generate voltages as follows;

  • PV1 →  -5V ~ +5V
  • PV2 → -3.3V ~ +3.3V
  • PV3 → 0 ~ +3.3V

Programmable Current Source (PCS) can generate current as follows;

  • PCS → 0 ~ 3.3mA

The device has a 4-channel oscilloscope; of those channels, the CH1, CH2, and CH3 pins are useful in experiments of this type.

About BJTs and FETs

Most semiconductor devices are made of silicon (Si); some are made of germanium (Ge), but those are not widely used. Silicon has a potential barrier of about 0.7 V between the P-type and N-type sections of a device. This voltage value is really important in an experiment: in a practical such as the "BJT Amplifier" there is no use in setting a voltage below this value, so the experiment needs to be programmed with 0.7 V as the minimum voltage for the Base terminal.

Basic BJT experiments

BJTs have three pins: Collector, Emitter, and Base. Current into the Base pin controls the flow of electrons from Emitter to Collector, creating a voltage difference between the Collector and Emitter pins. This behaviour can be broken down into three characteristics:

  • Input characteristics → relationship between the Emitter current and VBE (Base-to-Emitter voltage)
  • Output characteristics → relationship between IC (Collector current) and VCB (Collector-to-Base voltage)
  • Transfer characteristics → relationship between IC (Collector current) and IE (Emitter current)


Basic FET experiments

FETs have three pins: Drain, Source, and Gate. Voltage at the Gate terminal controls the electron flow between Source and Drain. This results in two types of experiments:

  • Output characteristics → relationship between the Drain current and the Drain-to-Source voltage
  • Transfer characteristics → relationship between the Gate-to-Source voltage and the Drain current

Using existing methods in PSLab android repository

The current implementation of the Android application contains all the methods required to read voltages and currents from the relevant pins, fetch waveforms from the channel pins, and output voltages from the PVS pins.

ScienceLab.java class – this class implements all the methods required for any kind of experiment. The methods useful in designing BJT- and FET-related experiments are:

Set Voltages

public void setPV1(float value);

public void setPV2(float value);

public void setPV3(float value);

Set Currents

public void setPCS(float value);

Read Voltages

public double getVoltage(String channelName, Integer sample);

Read Currents

There is no direct way implemented to read current. Instead, the current flowing between two nodes can be calculated from the PVS pin value and the voltage read from a channel pin, applying Ohm's law with the known resistance between the two nodes.

In the following schematic, the collector current can be calculated from the known PV1 value and the measured CH1 value, with a known 1 kΩ resistor between the two nodes, as follows:

IC = (PV1 – CH1) / 1000

This is how it is actually implemented in the existing experiments.
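In code, that calculation could look like the following (a minimal sketch, assuming an initialized ScienceLab instance and a known 1 kΩ resistor between the PV1 and CH1 nodes; the method name is illustrative):

// Sketch: measure the collector current via Ohm's law.
// scienceLab is an initialized ScienceLab instance; the 1000-ohm
// resistor value matches the formula above.
public double measureCollectorCurrent(ScienceLab scienceLab, float pv1Voltage) {
    scienceLab.setPV1(pv1Voltage);                  // apply the supply voltage
    double ch1 = scienceLab.getVoltage("CH1", 1);   // read the collector node
    return (pv1Voltage - ch1) / 1000.0;             // IC in amperes
}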

These are the basics one needs to know to implement a new experiment of any kind. Many new experiments can be built on top of them, for example:

  • The effect of the temperature coefficient on the collector current
  • The influence of the β factor on the collector current


Making Autocomplete Box Compatible with the Search Bar using Angular in Susper

A major problem in Susper was that we were using the same components on different pages with different styling properties. One such issue was that the autocomplete box was not properly aligned on the index page.

This was happening because the autocomplete box width was set to 634px, a width perfect for the search bar on the results page. The index page had a search bar of width 534px, and the autocomplete box was too large for it.
Here is how the issue was solved:

  1. Changing the suggestion box HTML code:

<div id="sug-box" class="suggestion-box" *ngIf="results.length > 0">
</div>

The code uses *ngIf, which is why setting the autocomplete box width from the TypeScript files becomes impossible: *ngIf does not load the component into the DOM until there are results, so we did not have the autocomplete box in the DOM until after a query was typed in the search bar. That is why we could not set its width, and hence it was decided to remove this attribute. Using the hidecomponent emitter is a better option here (used in the TypeScript file):

@Output() hidecomponent: EventEmitter<any> = new EventEmitter<any>();

if (this.results.length === 0) {
  this.hidecomponent.emit(1);
} else {
  this.hidecomponent.emit(0);
}

See autocomplete.component.ts for the complete code.

  2. It is now required to dynamically change the id of the suggestion box depending on the page it is on, and to apply the correct CSS.

Here is the html code:

<div [id]="getID()" class="suggestion-box" *ngIf="results">

The id of the suggestion box will now depend on the value returned from the function getID(), defined as follows:

getID() {
  if (this.route.url.toString() === '/') {
    return 'index-sug-box';
  } else {
    return 'sug-box';
  }
}
  • We first check if the route url is simply '/' (which implies we are on the index page).
  • If yes, the id is set to index-sug-box; otherwise to sug-box.

Now we can write extra CSS properties for the index-sug-box id as follows:

#index-sug-box {
  width: 586px;
}

References:

  1. For basic javascript functions: https://www.w3schools.com/js/js_functions.asp
  2. To understand components in Angular: https://angular.io/api/core/Component

 
