Creating Multiple Device Compatible Layouts in PSLab Android

The developer's goal is that the PSLab Android App should run smoothly on the wide variety of Android devices in the market. There are two aspects to this: the app should support the maximum number of Android versions possible, which relates to the core software, and it should deliver the same user experience on screens of all sizes. This post focuses on the latter. Android devices range all the way from 4-inch phones to 12-inch tablets, so the spread of screen sizes is quite large. The challenge for app designers is therefore to make the app compatible with as many devices as possible without any specific tweaks for a particular resolution range. Android has its own mechanism for scaling an app to the screen size and it does a good job almost all the time; still, there are cases where Android fails to scale the app up or down, leaving the layout distorted. This post discusses some of the tricks to keep in mind while designing layouts that work independent of screen size.

Avoid using absolute dimensions

This is one of the most common things to keep in mind before starting any UI design. Absolute units like px or in are fixed in size and do not scale when the screen size changes, so they must be avoided. Instead, use density-independent units like dp, which scale up or down with the screen's resolution. (It is a fair assumption that bigger screens have higher resolutions than smaller ones, although exceptions do exist.)

Ensure the use of the correct layout/view group

Android provides a variety of layouts like LinearLayout, ConstraintLayout, RelativeLayout and TableLayout, and view groups like ScrollView, RecyclerView and ListView, so it is often confusing which layout or view group should be used.
The following list gives a rough idea of when to use a particular layout or view group.

- LinearLayout - Mostly used for simple designs where the elements are stacked in an ordered horizontal/vertical fashion; it needs an explicit declaration of orientation.
- RelativeLayout - Mostly used when the elements need to be defined relative to the parent or to neighbouring elements. Since the elements are relative, there is no need to define an orientation.
- ConstraintLayout - Has all the features of RelativeLayout, plus the ability to add constraints between child elements or neighbouring elements.
- TableLayout - Helpful when all the views/widgets are arranged in an ordered fashion.

All the above layouts can be used interchangeably most of the time; however, certain cases make some more favourable than others. For example, when the views/widgets are not arranged in an organised manner, it is better to stick to LinearLayout or RelativeLayout.

- ListView - Used when the…
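The dp-based sizing recommended above follows a fixed rule: Android defines 1 dp as 1 px on a 160 dpi ("mdpi") screen, and scales linearly from there. A minimal sketch of the conversion (illustrative only, not part of the PSLab codebase, which relies on the framework doing this automatically):

```python
def dp_to_px(dp, screen_dpi):
    """Convert density-independent pixels to physical pixels.

    Android treats 1 dp as 1 px on a 160 dpi screen, so the same dp
    value maps to more physical pixels on denser screens.
    """
    return round(dp * screen_dpi / 160)

# The same 48 dp touch target covers a similar physical area everywhere:
print(dp_to_px(48, 160))  # mdpi   -> 48 px
print(dp_to_px(48, 320))  # xhdpi  -> 96 px
print(dp_to_px(48, 480))  # xxhdpi -> 144 px
```

This is why a layout specified in dp keeps its apparent size across devices, while one specified in px shrinks on dense screens.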


Using Sensors with PSLab Android App

The PSLab Android App as of now supports quite a few sensors. Sensors are an essential part of many science experiments, and therefore PSLab has a feature to support plug-and-play sensors. The sensors supported by PSLab are:

- AD7718 - 24-bit, 10-channel, low-voltage, low-power sigma-delta ADC
- AD9833 - Low-power programmable waveform generator
- ADS1115 - Low-power 16-bit ADC
- BH1750 - Light intensity sensor
- BMP180 - Digital pressure sensor
- HMC5883L - 3-axis digital magnetometer
- MF522 - RFID reader
- MLX90614 - Infrared thermometer
- MPU6050 - Accelerometer & gyroscope
- MPU925x - Accelerometer & gyroscope
- SHT21 - Humidity sensor
- SSD1306 - Controller for LED matrix
- Sx1276 - Low-power, long-range transceiver
- TSL2561 - Digital luminosity sensor

All the sensors except the Sx1276 communicate using the I2C protocol, whereas the Sx1276 uses the SPI protocol. The PSLab board has a dedicated set of ports for this communication under the label I2C, named 3.3V, GND, SCL and SDA.

Fig: PSLab board sketch

Any I2C sensor has at least the ports 3.3V/VCC, GND, SCL and SDA, along with some other ports on some sensors. The connections are as follows:

- 3.3V on PSLab - 3.3V/VCC on sensor
- GND on PSLab - GND on sensor
- SCL on PSLab - SCL on sensor
- SDA on PSLab - SDA on sensor

The diagram here shows the connections. For using the sensors with the Android App, there is a dedicated I2C library written in Java for the communication. Each sensor has its own specific set of functionalities and therefore its own library file. However, all the sensors share some common features; for example, each of them has a getRaw method which fetches the raw sensor data. To get data from a sensor, the sensor is first connected to the PSLab board. The following piece of code is responsible for detecting any devices connected to the PSLab board through the I2C bus. Each sensor has its own unique address and can be identified using it.
So, the auto-scan function returns the addresses of all the connected sensors, and the sensors can be uniquely identified using those addresses.

```java
public ArrayList<Integer> scan(Integer frequency) throws IOException {
    if (frequency == null) frequency = 100000;
    config(frequency);
    ArrayList<Integer> addresses = new ArrayList<>();
    for (int i = 0; i < 128; i++) {
        int x = start(i, 0);
        if ((x & 1) == 0) {
            addresses.add(i);
        }
        stop();
    }
    return addresses;
}
```

Based on the addresses fetched, the library corresponding to that particular sensor can be imported and its getRaw method called. The getRaw method returns the raw sensor data. For example, here is the getRaw method of the ADS1115:

```java
public int[] getRaw() throws IOException, InterruptedException {
    String chan = typeSelection.get(channel);
    if (channel.contains("UNI"))
        return new int[]{(int) readADCSingleEnded(Integer.parseInt(chan))};
    else if (channel.contains("DIF"))
        return new int[]{readADCDifferential(chan)};
    return new int[0];
}
```

Here the raw data is returned in the form of voltages in mV. Similarly, the…
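The address-to-sensor lookup that follows a scan can be sketched in a few lines. Note the addresses below are the typical 7-bit defaults from the respective datasheets, listed here for illustration; the actual address can differ if a sensor's address pin is strapped differently, so verify against your hardware:

```python
# Typical 7-bit I2C default addresses (datasheet defaults; verify on hardware).
KNOWN_SENSORS = {
    0x1E: "HMC5883L",  # 3-axis magnetometer
    0x23: "BH1750",    # light intensity
    0x39: "TSL2561",   # luminosity
    0x40: "SHT21",     # humidity
    0x48: "ADS1115",   # 16-bit ADC
    0x5A: "MLX90614",  # IR thermometer
    0x68: "MPU6050",   # accelerometer & gyroscope
    0x77: "BMP180",    # pressure
}

def identify(addresses):
    """Map the address list returned by an I2C bus scan to sensor names."""
    return [KNOWN_SENSORS.get(a, "unknown (0x%02X)" % a) for a in addresses]

print(identify([0x48, 0x68]))  # ['ADS1115', 'MPU6050']
```

This mirrors what the app does after the scan: pick the library file matching each detected address and call its getRaw method.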


Creating Custom Components in the PSLab Android App

The PSLab Android App supports a lot of features, and each of these features needs components and views for its implementation. A typical UI of PSLab is shown in the figure below. Considering the number of views and components used in the figure, implementing each view and component separately would lead to a huge volume of repetitive and inefficient code. Since the EditText and the two buttons beside it repeat a lot, it is wiser to create a single custom component consisting of an EditText and two buttons. This not only leads to more efficient code but also drastically reduces its volume. Android has a feature which allows creating such components. For almost all cases, the pre-defined views in Android serve our purpose of creating UIs. However, sometimes there is a need to create custom components to reduce code volume and improve quality. Custom components are used when a particular set of components we need is not present in the Android view collection, when a pattern of components is frequently repeated, or when we need to reduce code complexity. The set above can be replaced by defining a custom component which includes an EditText and two buttons, and then treating it just like any other component.
To get started with creating a custom component, the steps are the following:

Create a layout for the custom component to be designed

```xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="horizontal"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <Button
        android:id="@+id/button_control_plus"
        android:layout_width="0dp"
        android:layout_weight="0.5"
        android:layout_height="20dp"
        android:background="@drawable/button_minus" />

    <EditText
        android:id="@+id/edittext_control"
        android:layout_width="0dp"
        android:layout_weight="2"
        android:layout_height="24dp"
        android:layout_marginTop="@dimen/control_margin_small"
        android:inputType="numberDecimal"
        android:padding="@dimen/control_edittext_padding"
        android:background="@drawable/control_edittext" />

    <Button
        android:id="@+id/button_control_minus"
        android:layout_width="0dp"
        android:layout_weight="0.5"
        android:layout_height="20dp"
        android:background="@drawable/button_plus" />
</LinearLayout>
```

The layout file edittext_control.xml is created with three views, each of which has been assigned an ID along with all the other relevant parameters.

Incorporate the newly created custom layout in the Activity/Fragment layout file

```xml
<org.fossasia.pslab.others.Edittextwidget
    android:id="@+id/etwidget_control_advanced1"
    android:layout_height="wrap_content"
    android:layout_width="0dp"
    android:layout_weight="2"
    android:layout_marginLeft="@dimen/control_margin_small"
    android:layout_marginStart="@dimen/control_margin_small" />
```

The custom layout can be added to the activity/fragment layout just like any other view and can be assigned properties similarly.
Create the activity file for the custom layout

```java
public class Edittextwidget extends LinearLayout {
    private EditText editText;
    private Button button1;
    private Button button2;
    private double leastCount;
    private double maxima;
    private double minima;

    public Edittextwidget(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
        applyAttrs(attrs);
    }

    public Edittextwidget(Context context, AttributeSet attrs) {
        super(context, attrs);
        applyAttrs(attrs);
    }

    public Edittextwidget(Context context) {
        super(context);
    }

    public void init(Context context, final double leastCount, final double minima, final double maxima) {
        View.inflate(context, R.layout.edittext_control, this);
        editText = (EditText) findViewById(R.id.edittext_control);
        button1 = (Button) findViewById(R.id.button_control_plus);
        button2 = (Button) findViewById(R.id.button_control_minus);
        button1.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View v) {
                Double data = Double.valueOf(editText.getText().toString());
                data = data - leastCount;
                data = data > maxima ? maxima : data;
                data = data < minima ? minima : data;
                editText.setText(String.valueOf(data));
            }
        });
        button2.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View v) {
                Double data = Double.valueOf(editText.getText().toString());
                data = data + leastCount;
                data = data > maxima ? maxima : data;
                data = data < minima…
```
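Both click listeners above implement the same idea: step the value by one least count and clamp it into the allowed range. The core logic, reduced to a language-neutral Python sketch (function name and signature are mine, not from the app):

```python
def step_value(value, least_count, minima, maxima, direction):
    """Step a reading up or down by one least count, clamped to [minima, maxima].

    direction is +1 for the increment button and -1 for the decrement button.
    """
    value += direction * least_count
    return max(minima, min(maxima, value))

print(step_value(5.0, 0.5, 0.0, 10.0, +1))  # 5.5
print(step_value(9.9, 0.5, 0.0, 10.0, +1))  # 10.0 (clamped at maxima)
print(step_value(0.2, 0.5, 0.0, 10.0, -1))  # 0.0 (clamped at minima)
```

Factoring the clamp out like this would also let both Java listeners share one helper instead of duplicating the ternary chain.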


Trigger Controls in Oscilloscope in PSLab

The PSLab Desktop App includes an oscilloscope. Modern oscilloscopes found in laboratories support a lot of advanced features, and adding trigger controls was one attempt at bringing such an advanced feature to the PSLab oscilloscope. As the current implementation of the trigger is not robust enough, this feature helps better stabilise waveforms. Captured waveforms often suffer from distortion, and the trigger helps solve this problem. The trigger is an essential oscilloscope feature for signal characterisation, as it synchronises the horizontal sweep of the oscilloscope to the proper point of the signal. Trigger control enables users to stabilise repetitive waveforms as well as capture single-shot waveforms. By repeatedly displaying a similar portion of the input signal, the trigger makes a repetitive waveform look static. To visualise how an oscilloscope looks with and without a trigger, see the figures below.

Fig 1: (a) Without trigger (b) With trigger

Fig 1(a) is the actual waveform received by the oscilloscope, and it is easy to see that interpreting it is confusing because multiple waveforms overlap. In Fig 1(b), the trigger control stabilises the waveform and captures just one trace. The trigger modes commonly used in laboratory oscilloscopes are:

- Auto - This mode allows the oscilloscope to acquire a waveform even when it does not detect a trigger condition. If no trigger condition occurs while the oscilloscope waits for a specific period (as determined by the time-base setting), it forces itself to trigger.
- Normal - This mode allows the oscilloscope to acquire a waveform only when it is triggered. If no trigger occurs, the oscilloscope does not acquire a new waveform, and the previous waveform, if any, remains on the display.
- Single - This mode allows the oscilloscope to acquire one waveform each time the RUN button is pressed and the trigger condition is detected.
- Scan - This mode continuously sweeps the waveform from left to right.

Implementing the trigger function in PSLab

PSLab has basic trigger control built into the configure_trigger method in sciencelab.py. The method is called when the trigger is enabled in the GUI. The trigger activates when the incoming wave reaches a certain voltage threshold, and PSLab also provides the option of triggering on either the rising or the falling edge. The trigger is especially useful in experiments handling waves like sine waves and square waves, where it helps obtain a clear picture. To initiate the trigger in the PSLab desktop app, the configure_trigger method in sciencelab.py is called. The method takes some input parameters, but they are optional; if values are not specified, defaults are assumed.

```python
def configure_trigger(self, chan, name, voltage, resolution=10, **kwargs):
    prescaler = kwargs.get('prescaler', 0)
    try:
        self.H.__sendByte__(CP.ADC)
        self.H.__sendByte__(CP.CONFIGURE_TRIGGER)
        self.H.__sendByte__((prescaler << 4) | (1 << chan))
        if resolution == 12:
            level = self.analogInputSources[name].voltToCode12(voltage)
            level = np.clip(level, 0, 4095)
        else:
            level = self.analogInputSources[name].voltToCode10(voltage)
            level = np.clip(level, 0, 1023)
        if…
```
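The voltToCode conversion used above maps the trigger voltage into the ADC's integer code range before clipping it. A simplified, self-contained sketch of that idea, assuming a linear input stage spanning a symmetric range (the real scaling lives inside PSLab's analog input source objects, and the range values here are illustrative):

```python
def volt_to_code(voltage, v_min=-16.5, v_max=16.5, resolution=10):
    """Map a voltage to an ADC code, clipped to the converter's range.

    Assumes a linear input stage over [v_min, v_max]; 10-bit gives codes
    0..1023, 12-bit gives 0..4095, matching the np.clip bounds above.
    """
    full_scale = (1 << resolution) - 1
    code = round((voltage - v_min) / (v_max - v_min) * full_scale)
    return max(0, min(full_scale, code))

print(volt_to_code(0.0))                   # mid-scale: 512
print(volt_to_code(99.0))                  # out of range, clipped to 1023
print(volt_to_code(-99.0, resolution=12))  # out of range, clipped to 0
```

The clipping is what keeps a threshold set beyond the input range from producing an invalid code on the hardware side.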


Using Open Layers 3 to Render a loklak Emoji Heatmap

In the Emoji Heatmapper App I am implementing a heatmap with the help of OpenLayers 3, which contains a really handy class, ol.layer.Heatmap. In this blog post I am going to explain how the heatmap actually works behind the scenes. A heatmap is an impressive way to visualize data: for a given matrix of data, each value is represented by a color. A heatmap implementation is usually expensive in computational terms, since for each pixel of the grid we need to compute its color from a set of known values. That is not a feasible method to implement on the client side, because map rendering would take too much time. OpenLayers 3 contains an easy-to-use class called ol.layer.Heatmap, which allows vector data to be rendered as a heatmap. So how is this implemented? The ol.layer.Heatmap layer uses a smart approximation which produces relatively good results and which is also fast. The steps can be outlined as:

- A gradient of colors is created as an image.
- Each value is rendered in a canvas as a blurred point using the default radius and gradient. This produces a canvas where the blurred points may overlap each other and create fuzzier zones.
- Finally, an image is obtained from the canvas. The color is obtained from the gradient image, and the obtained color value may vary from 0 to 255.

Example usage of the ol.layer.Heatmap class:

```javascript
var heatMap = new ol.layer.Heatmap({
    source: vector,
    blur: parseInt(15, 10),
    radius: parseInt(5, 10),
    opacity: 0.9,
    gradient: ['#0000ff', '#f00', '#f00', '#ff0', '#f00']
});
```

The colored image is then rendered in the map canvas, producing a nice effect suited for density maps. ol.layer.Heatmap offers some properties we can use to visualize the map better, including blur, radius, gradient, shadow and weight. These can be configured per feature, according to its level of importance, determining to a greater or lesser degree the final color we want.
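The final recoloring step can be sketched outside the browser: the gradient is precomputed once as a lookup table, and each pixel's accumulated intensity (0 to 255) indexes into it. This is a hedged, simplified Python rendition of that idea, not OpenLayers' actual code, and the colors are illustrative:

```python
def build_gradient(stops, steps=256):
    """Linearly interpolate a list of RGB stops into a 256-entry lookup
    table, mimicking the small gradient image the heatmap layer renders
    once up front and then samples per pixel."""
    table = []
    spans = len(stops) - 1
    for i in range(steps):
        pos = i / (steps - 1) * spans       # position along the stop list
        k = min(int(pos), spans - 1)        # which pair of stops we are between
        t = pos - k                         # interpolation factor within the pair
        c0, c1 = stops[k], stops[k + 1]
        table.append(tuple(round(c0[j] + (c1[j] - c0[j]) * t) for j in range(3)))
    return table

# A simple blue-to-red ramp; lowest intensity maps to blue, highest to red.
ramp = build_gradient([(0, 0, 255), (255, 0, 0)])
print(ramp[0])    # (0, 0, 255)
print(ramp[255])  # (255, 0, 0)
```

Coloring then becomes a single table lookup per pixel, which is what makes the approximation fast enough for client-side rendering.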
Fig: Default colors of the gradient property
Fig: Gradient property used with different colors

Resources

- Emoji-Heatmapper App: apps.loklak.org/emojiHeatmapper
- OpenLayers 3 Heatmap class: https://openlayers.org/en/latest/apidoc/ol.layer.Heatmap.html


Open Event Server: Working with Migration Files

FOSSASIA's Open Event Server uses Alembic migration files to handle all database operations and updates. From creating tables to updating tables and the database, everything works with the help of migration files. However, we often miss that automatically generated migration files mainly drop and add columns rather than just changing them. One example of this would be:

```python
def upgrade():
    ### commands auto generated by Alembic - please adjust! ###
    op.add_column('session', sa.Column('submission_date', sa.DateTime(), nullable=True))
    op.drop_column('session', 'date_of_submission')
```

Similarly, when the idea was to change has_session_speakers (string) to is_session_speakers_enabled (boolean), the result was the whole column being dropped and a new boolean column created. We realize that in doing so, all the data under has_session_speakers is lost. How do we solve that? Here are two ways to follow up:

op.alter_column:
----------------------------------

When the update is as simple as changing a column name, we can use this. As discussed above, if we migrate directly after changing a column in our model, the automatically created migration drops the old column and creates a new column with the changes. Doing this in production would cause a huge loss of data, which we don't want. Suppose we just want to change the name of the column start_time to starts_at; we don't want the entire column to be dropped. An alternative is using op.alter_column. The two necessary parameters of op.alter_column are the table name and the column you want to alter. The other parameters describe the changes. Some of the commonly used parameters are:

- nullable - Optional: specify True or False to alter the column's nullability.
- new_column_name - Optional: specify a string name here to indicate the new name within a column rename operation.
- type_ - Optional: a TypeEngine type object to specify a change to the column's type. For SQLAlchemy types that also indicate a constraint (i.e. Boolean, Enum), the constraint is also generated.
- autoincrement - Optional: set the AUTO_INCREMENT flag of the column; currently understood by the MySQL dialect.
- existing_type - Optional: a TypeEngine type object to specify the previous type. This is required for all column alter operations that don't otherwise specify a new type, as well as for when nullability is being changed on a column.

So, for example, if you want to change a column name from "start_time" to "starts_at" in the events table, you would write:

```python
op.alter_column('events', 'start_time', new_column_name='starts_at')
```

```python
def upgrade():
    ### commands auto generated by Alembic - please adjust! ###
    op.alter_column('sessions_version', 'end_time', new_column_name='ends_at')
    op.alter_column('sessions_version', 'start_time', new_column_name='starts_at')
    op.alter_column('events_version', 'end_time', new_column_name='ends_at')
    op.alter_column('events_version', 'start_time', new_column_name='starts_at')
```

Here, sessions_version and events_version are the table names, altering the columns start_time to starts_at and end_time to ends_at with the op.alter_column parameter new_column_name.

op.execute:
--------------------

With alter_column, most alterations to a column's name, constraints or type are achievable. But there can be a separate scenario for changing column properties. Suppose I change a table with a column "aspect_ratio", which was a string column and had values "on" and…
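For a scenario like the one the excerpt trails off into, converting a string flag column into a boolean, op.alter_column alone is not enough: the existing string values have to be rewritten with raw SQL (via op.execute) before the type change. A hedged sketch of the UPDATE statements such a migration would run; the table and column names are illustrative, not the project's actual migration:

```python
def bool_flag_update_sql(table, column, true_value="on"):
    """Build the raw UPDATE statements a migration would hand to
    op.execute() to normalise a string flag column before altering
    its type to Boolean. Names are illustrative only."""
    return [
        "UPDATE {t} SET {c} = 'true' WHERE {c} = '{v}'".format(
            t=table, c=column, v=true_value),
        "UPDATE {t} SET {c} = 'false' WHERE {c} != 'true'".format(
            t=table, c=column),
    ]

for stmt in bool_flag_update_sql("events", "aspect_ratio"):
    print(stmt)
# In the migration, each statement would be passed to op.execute(stmt),
# followed by an op.alter_column(...) changing the column type to Boolean.
```

Running the data rewrite first means no rows are lost when the column's type finally changes.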


Implementing Search Feature In SUSI Web Chat

SUSI Web Chat now has a search feature. Users now have an option to filter or find messages. The user can enter a keyword or phrase in the search field, all the matched messages are highlighted with the given keyword, and the user can then navigate through the results. Let's visit SUSI Web Chat and try it out. Clicking on the search icon in the top right corner of the chat app screen, we see a search field expand to the left from the search icon. Type any word or phrase, and you see that all the matches are highlighted in yellow, while the currently focused message is highlighted in orange. We can use the up and down arrows to navigate between previous and recent messages containing the search string. We can also choose to search case-sensitively using the drop-down opened by clicking on the vertical dots icon to the right of the search component. Click on the `X` icon or the search icon to exit from the search mode; the search field contracts to the right, back to its initial state as a search icon.

How does the search feature work? We first make our search component with a search field, navigation arrow icon buttons and an exit icon button. We then listen to input changes in our search field using the onChange function. On each input change, we collect the search string and iterate through all the existing messages, checking whether each message contains the search string; if present, we mark that message before passing it to MessageListItem to render it.

```javascript
let match = msgText.indexOf(matchString);
if (match !== -1) {
    msgCopy.mark = {
        matchText: matchString,
        isCaseSensitive: isCaseSensitive
    };
}
```

We also need to pass the message ID of the currently focused message to MessageListItem, as we need to identify that message to highlight it in orange instead of yellow, differentiating the current match from all other matches.
```javascript
function getMessageListItem(messages, markID) {
    if (markID) {
        return messages.map((message) => {
            return (
                <MessageListItem
                    key={message.id}
                    message={message}
                    markID={markID}
                />
            );
        });
    }
}
```

We also store the indices of the marked messages in the MessageSection component state, which is later used to iterate through the highlighted results.

```javascript
searchTextChanged = (event) => {
    let matchString = event.target.value;
    let messages = this.state.messages;
    let markingData = searchMsgs(messages, matchString,
        this.state.searchState.caseSensitive);
    if (matchString) {
        let searchState = {
            markedMsgs: markingData.allmsgs,
            markedIDs: markingData.markedIDs,
            markedIndices: markingData.markedIndices,
            scrollLimit: markingData.markedIDs.length,
            scrollIndex: 0,
            scrollID: markingData.markedIDs[0],
            caseSensitive: this.state.searchState.caseSensitive,
            open: false,
            searchText: matchString
        };
        this.setState({
            searchState: searchState
        });
    }
}
```

After marking the matched messages with the search string, we pass the messages array into the MessageListItem component, where the messages are processed and rendered. Here, we check whether the message received from MessageSection is marked, and if so, we highlight it. To highlight all occurrences of the search string in the message text, I used a module called react-text-highlight.

```javascript
import TextHighlight from 'react-text-highlight';

if (this.props.message.id === markMsgID) {
    markedText.push(
        <TextHighlight
            key={key}
            highlight={matchString}
            text={part}
            markTag='em'
            caseSensitive={isCaseSensitive}
        />
    );
}…
```
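The core of the search, matching messages against a query with an optional case-sensitivity flag and recording their indices, can be sketched independently of React. This is a language-neutral Python illustration of the logic, not the app's JavaScript:

```python
def mark_messages(messages, query, case_sensitive=False):
    """Return the indices of messages containing the query, mirroring how
    the chat client tags matched messages before rendering highlights."""
    if not query:
        return []
    needle = query if case_sensitive else query.lower()
    marked = []
    for i, text in enumerate(messages):
        haystack = text if case_sensitive else text.lower()
        if needle in haystack:
            marked.append(i)
    return marked

msgs = ["Hello there", "hello world", "goodbye"]
print(mark_messages(msgs, "Hello"))                       # [0, 1]
print(mark_messages(msgs, "Hello", case_sensitive=True))  # [0]
```

The returned index list plays the role of markedIndices in the component state: the up/down arrows simply step a cursor through it.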


Processing Text Responses in SUSI Web Chat

The SUSI Web Chat client now supports emojis, images, links and special characters. However, these aren't declared as separate action types, i.e. the server doesn't explicitly tell the client that the response contains any of these features when it sends the JSON response. So the client must parse the text response from the server and add support for each of the features instead of rendering the plain text as-is, to ensure a good UX. The SUSI Web Chat client parses text responses to support:

- HTML special entities
- Images and GIFs
- URLs and mail IDs
- Emojis and symbols

```javascript
// Process the text for HTML special chars, images, links and emojis
function processText(text) {
    if (text) {
        let htmlText = entities.decode(text);
        let imgText = imageParse(htmlText);
        let replacedText = parseAndReplace(imgText);
        return <Emojify>{replacedText}</Emojify>;
    };
    return text;
}
```

Let us write sample skills to test these out. Visit http://dream.susi.ai/ and enter textprocessing. You can then see a few sample queries and responses at http://dream.susi.ai/p/textprocessing. Let's visit SUSI Web Chat and try it out.

Query: dream textprocessing
Response: dreaming enabled for textprocessing

Query: text with special characters
Response: &para; Here are few "Special Characters&rdquo;!

All the special entity notations have been parsed and rendered accordingly. Sometimes we might need to use HTML special characters for reasons like:

- You need to escape HTML special characters like <, &, or ".
- Your keyboard does not support the required character. For example, many keyboards do not have the em-dash or the copyright symbol.

You might be wondering why the client needs to handle this separately, as such entities are generally converted to the relevant characters automatically while rendering HTML. The SUSI Web Chat client uses React, which renders JSX rather than HTML, and JSX doesn't support HTML special entities, i.e. they aren't automatically converted to the relevant characters while rendering.
Hence, the client needs to handle this explicitly. We used the module html-entities to decode all types of special HTML characters and entities. This module parses the text for HTML entities and replaces them with the relevant characters for rendering.

```javascript
import { AllHtmlEntities } from 'html-entities';
const entities = new AllHtmlEntities();

let htmlText = entities.decode(text);
```

Now that the HTML entities are processed, the client then processes the text for image links. Let us now look at how images and GIFs are handled.

Query: random gif
Response: https://media1.giphy.com/media/AAKZ9onKpXog8/200.gif

Sometimes the text contains links for images or GIFs, and the user would expect a media type like an image or GIF instead of text. So we need to replace those image links with actual images to ensure a good UX. This is handled using regular expressions to match image-type URLs and replace them with HTML img tags, so that the response is an image and not URL text.

```javascript
// Parse text for image URLs
function imageParse(stringWithLinks) {
    let replacePattern = new RegExp([
        '((?:https?:\\/\\/)(?:[a-zA-Z]{1}',
        '(?:[\\w-]+\\.)+(?:[\\w]{2,5}))',
        '(?::[\\d]{1,5})?\\/(?:[^\\s/]+\\/)',
        '*(?:[^\\s]+\\.(?:jpe?g|gif|png))',
        '(?:\\?\\w+=\\w+(?:&\\w+=\\w+)*)?)'
    ].join(''), 'gim');
    let splits = stringWithLinks.split(replacePattern);
    let result = [];
    splits.forEach((item, key) => {
        let checkmatch = item.match(replacePattern);
        if (checkmatch) {
            result.push(
                <img key={key} src={checkmatch}
                    style={{width: '95%', height: 'auto'}} alt=''/>)
        }
        else {…
```
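The same idea can be tested in isolation with a deliberately simplified pattern. This is a hedged Python approximation of the client's regex, not the exact expression above: an http(s) URL whose path ends in a common image extension, optionally followed by a query string:

```python
import re

# Simplified stand-in for the client's image-URL pattern.
IMAGE_URL = re.compile(r"https?://\S+\.(?:jpe?g|gif|png)(?:\?\S*)?", re.IGNORECASE)

def is_image_url(text):
    """True when the whole string is an image-type URL."""
    return IMAGE_URL.fullmatch(text) is not None

print(is_image_url("https://media1.giphy.com/media/AAKZ9onKpXog8/200.gif"))  # True
print(is_image_url("https://example.com/page.html"))                         # False
```

In the client the matching segments are swapped for img tags; here the predicate just makes the match/no-match decision visible.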


Preparing a release for Phimpme Android

Most of the essential features are now in a stable state in our Phimpme Android app, so we decided to release a beta version. In FOSSASIA we follow a branch policy where all current development takes place in the development branch and the stable code resides in the master branch. Releasing an app is not just building an APK and submitting it to the distribution platform; certain guidelines should be followed. How I prepare a release APK for Phimpme:

List down the features

We discussed on our public channel which features are now in a stable state and can be released. Features such as the account manager and the Share Activity are excluded because they are incomplete and under active development; we don't want to show an under-development feature. We then made a list of the available features in the different categories of Camera, Gallery and Share.

Follow the branch policy

The releasable, stable codebase should be on the master branch. It is good to follow the branch policy because it helps if we encounter any problem with the released APK: we can go directly to our master branch and check there. The development branch is very volatile because of the active development going on.

Every contributor's contribution is important

When we browse our old branches, such as master in our case, we generally see it is behind the development branch by hundreds of commits. When we then create a PR to bring the branch up to date, it contains all those old commits. In this case, while opening and merging, do not squash the commits.

Testing from the developer's end

Testing is a very essential part of development. Before releasing, it is good practice for the developer to test the app from their end. We tested the app's features on different devices with varying Android OS versions and screen sizes. If there is any compatibility issue, report it right away; there are several tools in Android to fix such issues.
Support a variety of devices and screen sizes

Change the package name and application ID

The package name and application ID are the vitals of an app; they uniquely identify it as an app in the world. For example, I changed the package name of the Phimpme app to org.fossasia.phimpme. Also check all the permissions the app requires.

Create a release build type

Build types are a great way to categorize builds; Debug and Release are the two defaults. There are various things in the codebase which we want only in debug mode, so when we create the release build type it leaves out that part of the code. Add the build type in your application build.gradle:

```groovy
buildTypes {
    release {
        minifyEnabled false
    }
}
```

Rebuild the app again and verify from the left tab bar.

Generate a signed APK and create a keystore (.jks) file

Navigate to Build → Generate Signed APK. Fill in all the details and proceed further to generate the signed APK in your home directory.

Add signing configurations in build.gradle

Copy the keystore (.jks) file to the root of the project and add the signing configurations:

```groovy
signingConfigs {
    config {
        keyAlias 'phimpme'…
```


Sentiment Data in Emoji-Heatmapper loklak App

Analysing emojis can uncover meaning and sentiment in ways regular text analytics cannot. This was the main idea behind introducing sentiment data into the Emoji-Heatmapper app. The loklak Search API has features such as classification and categorization of tweets. The emotions, for instance, can be joy, anticipation, sadness etc. So, in the Emoji-Heatmapper app, I display the occurrence of emojis on the map according to the location traced, and also the sentiment related to the emoji, i.e. the search query, as follows:

How to get the sentiment data

One simply enters the emoji into the search box to get the results. The following snippet shows part of the loklak Search API results (a JSON object):

```json
"hashtags_count": 0,
"classifier_emotion": "anger",
"classifier_emotion_probability": 1.2842921170985733E-9,
"classifier_language": "english",
"classifier_language_probability": 1.4594549568869297E-8,
"without_l_len": 49,
"without_lu_len": 49,
"without_luh_len": 49,
```

I am using the field name "classifier_emotion" above to display the results. Up to here, getting the data relevant to the query is done. Next, the classifier_emotion of each tweet containing the query is collected into an array and sorted to get a unique list:

```javascript
var emotion = [];
var emotions_array = [];
for (var i = 0; i < tweets.statuses.length; i++) {
    if (tweets.statuses[i].classifier_emotion) {
        emotion = tweets.statuses[i].classifier_emotion;
        emotions_array.push(emotion);
    }
}
emotions_array.sort();
emotion_array = jQuery.unique(emotions_array);
```

Loading the sentiment data onto the screen

The query may have a single emotion, multiple emotions or no emotion data at all.
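As an aside, the collect-sort-deduplicate step just described boils down to a few lines; here it is sketched in Python for clarity (the app itself does this in JavaScript with jQuery.unique):

```python
def unique_emotions(statuses):
    """Collect classifier_emotion from each tweet (when present) and
    return the sorted, de-duplicated list of emotions."""
    emotions = [s["classifier_emotion"] for s in statuses if s.get("classifier_emotion")]
    return sorted(set(emotions))

tweets = [
    {"classifier_emotion": "anger"},
    {"classifier_emotion": "joy"},
    {"text": "no emotion field on this tweet"},
    {"classifier_emotion": "anger"},
]
print(unique_emotions(tweets))  # ['anger', 'joy']
```

Tweets without a classifier_emotion field are simply skipped, which is why the result can also be empty.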
These use cases are displayed as follows:

Fig: Single emotion
Fig: Multiple emotions
Fig: No emotion data

The code which creates this output dynamically on screen is as follows:

```javascript
// Loading the sentiment
$(document).ready(function() {
    var listItems = "";
    if (emotions_array.length == 0) {
        listItems = "No Sentiment data is available for " + query;
    }
    if (emotion_array.length == 1) {
        listItems += "<h3> Sentiment of " + query + " is ";
    } else if (emotion_array.length > 1) {
        listItems += "<h3> Sentiments of " + query + " are ";
    }
    var emotion_data = emotion_array.join(", ") + ".";
    listItems += emotion_data + "</h3>";
    $("#sentiment").html(listItems);
});
```

Conclusion

The Emoji-Heatmapper app displays the sentiment data of the query being searched for, populating the data dynamically using the loklak Search API.

Resources

- Emoji-Heatmapper App, try it out here: http://apps.loklak.org/emojiHeatmapper/
- Source code: https://github.com/fossasia/apps.loklak.org/tree/master/emojiHeatmapper
- Search API: http://api.loklak.org/
