In this blog I’ll be explaining how to view, edit and delete connected devices from the SUSI.AI web client. To connect a device, open up the SUSI.AI Android app and fill in the details accordingly. A device can also be connected by logging in to your Raspberry Pi. Once a device is connected you can edit it, delete it and access device-specific features from the web client.
My Devices
All the connected devices can be viewed in the My Devices tab in the Dashboard. In this tab, all the devices connected to your account are listed in a table along with their locations on a map. Each device table row has three action buttons: view, edit and delete. Clicking on the view button takes you to the device-specific page. Clicking on the edit button makes the name and room fields editable in the table row. Clicking on the delete button opens a confirm-with-input dialog; the device can be deleted by entering the device name and clicking on delete.
To fetch all the devices, the getUserDevices action is dispatched on component mount, which sets the devices state in the settings reducer. The initialiseDevices function is called after all the devices are fetched from the server. This function creates an array of device objects with name, room, macId, latitude, longitude and location.
componentDidMount() {
const { accessToken, actions } = this.props;
if (accessToken) {
actions
.getUserDevices()
.then(({ payload }) => {
this.initialiseDevices();
this.setState({
loading: false,
emptyText: 'You do not have any devices connected yet!',
});
})
.catch(error => {
this.setState({
loading: false,
emptyText: 'Some error occurred while fetching the devices!',
});
console.log(error);
});
}
document.title =
'My Devices - SUSI.AI - Open Source Artificial Intelligence for Personal Assistants, Robots, Help Desks and Chatbots';
}
initialiseDevices = () => {
const { devices } = this.props;
if (devices) {
let devicesData = [];
let deviceIds = Object.keys(devices);
let invalidLocationDevices = 0;
deviceIds.forEach(eachDevice => {
const {
name,
room,
geolocation: { latitude, longitude },
} = devices[eachDevice];
let deviceObj = {
macId: eachDevice,
deviceName: name,
room,
latitude,
longitude,
location: `${latitude}, ${longitude}`,
};
if (
deviceObj.latitude === 'Latitude not available.' ||
deviceObj.longitude === 'Longitude not available.'
) {
deviceObj.location = 'Not found';
invalidLocationDevices++;
} else {
deviceObj.latitude = parseFloat(latitude);
deviceObj.longitude = parseFloat(longitude);
}
devicesData.push(deviceObj);
});
this.setState({
devicesData,
invalidLocationDevices,
});
}
};
Device Page
Clicking on the view icon button in My Devices redirects to mydevices/:macId. This page consists of the device information in tabular format, local configuration settings, and the location of the device on the map. Users can edit and delete the device from the actions present in the table. Local configuration settings can be accessed only if the user is logged in to the local server.
Edit Device
To edit a device, click on the edit icon button in the actions column of the table. The name and room fields become editable. On changing the values, the handleChange function is called, which updates the devicesData state. Clicking on the tick icon saves the new details by calling the onDeviceSave function. This function calls the addUserDevice API with the new device details.
To delete a device, click on the delete icon button under the actions column in the table. Clicking on the delete device button opens up the confirm-with-input dialog. Type in the name of the device and click on delete. Clicking on delete calls the handleRemoveDevice function, which calls the removeUserDevice API with the macId. On deleting the device, the user is redirected to My Devices in the Dashboard.
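The confirm-and-delete step can be sketched as follows. This is an illustrative sketch, not the web client's actual code; the function and parameter names are assumptions, and removeUserDevice stands in for the real API action:

```javascript
// Illustrative sketch: the dialog only allows deletion when the typed
// name matches the device name exactly (names here are assumptions).
function canDeleteDevice(enteredName, deviceName) {
  return enteredName === deviceName;
}

function handleRemoveDevice(device, enteredName, removeUserDevice) {
  if (!canDeleteDevice(enteredName, device.deviceName)) {
    return false; // keep the dialog open; nothing is deleted
  }
  removeUserDevice(device.macId); // the API takes the macId (per the post)
  return true; // caller can now redirect to My Devices
}
```

Requiring the user to retype the device name guards against accidental deletions.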
In conclusion, the My Devices tab in the Dashboard helps you manage the devices connected to your account, along with device-specific configuration. Users can now view, edit and delete their connected devices.
The Open Event Attendee is an Android app which allows users to discover events happening around the world using the Open Event Platform. It consumes the APIs of the Open Event server to get a list of available events and detailed information about them. The app deals with events based on location, but the location had to be taken as input from the user, even though in many cases we just want to search for events around our current position. To make this work, I have added a current-location option, with which the app gets our location and searches for nearby events; earlier we had to enter our current location manually even to search for nearby events.
Model-View-ViewModel (MVVM) is a software architectural pattern. MVVM facilitates separation of the development of the graphical user interface (be it via a markup language or GUI code) from the development of the business logic or back-end logic (the data model).
Why Model-view-ViewModel?
Setup Geo location View Model
Configure location feature with MVVM
Conclusion
Resources
Let’s analyze every step in detail.
Advantages of using Model-view-ViewModel
A clean separation of different kinds of code should make it easier to go into one or several of those more granular and focused parts and make changes without worrying.
External and internal dependencies are in separate pieces of code from the parts with the core logic that you would like to test.
Observation of mutable live data whenever it is changed.
Setup the Geolocation view model
Create a new Kotlin class named GeoLocationViewModel, which contains a configure function for the current location:
private fun checkLocationPermission() {
val permission = context?.let {
ContextCompat.checkSelfPermission(it, Manifest.permission.ACCESS_COARSE_LOCATION) }
if (permission != PackageManager.PERMISSION_GRANTED) {
requestPermissions(arrayOf(Manifest.permission.ACCESS_COARSE_LOCATION,
Manifest.permission.ACCESS_FINE_LOCATION), LOCATION_PERMISSION_REQUEST)
}
}
Check whether device location is enabled; if not, send an intent to open the location settings. This method is written inside the configure function:
val service = activity.getSystemService(Context.LOCATION_SERVICE)
var enabled = false
if (service is LocationManager) enabled = service.isProviderEnabled(LocationManager.NETWORK_PROVIDER)
if (!enabled) {
val intent = Intent(Settings.ACTION_LOCATION_SOURCE_SETTINGS)
activity.startActivity(intent)
return
}
Now create mutable live data for the current location inside the view model class:
private val mutableLocation = MutableLiveData<String>()
val location: LiveData<String> = mutableLocation
Now implement the location request and location callback inside the configure method:
val locationRequest: LocationRequest = LocationRequest.create()
locationRequest.priority = LocationRequest.PRIORITY_LOW_POWER
val locationCallback = object : LocationCallback() {
override fun onLocationResult(locationResult: LocationResult?) {
if (locationResult == null) {
return
}
for (location in locationResult.locations) {
if (location != null) {
val latitude = location.latitude
val longitude = location.longitude
try {
val geocoder = Geocoder(activity, Locale.getDefault())
val addresses: List<Address> = geocoder.getFromLocation(latitude, longitude, maxResults)
for (address: Address in addresses) {
if (address.adminArea != null) {
mutableLocation.value = address.adminArea
}
}
} catch (exception: IOException) {
Timber.e(exception, "Error Fetching Location")
}
}
}
}
}
Finally, call the location service inside the configure method with the location request and callback to start receiving location updates.
In essence, the Eventyay Attendee should show all the events near the user or in their city. The app was already doing the job, but we had to manually select the city or locality we wished to search; after the addition of a dedicated current-location option, the app is more user-friendly and automated.
If you’re worried about privacy when using smart assistants, or just want to build your own one with complete freedom, then this guide will help you. SUSI.AI provides Artificial Intelligence for Smart Speakers, Personal Assistants, Robots, Help Desks and Chatbots. SUSI.AI is completely free and open-source software.
In this guide, we will be building our own smart speaker assistant which will talk to the user just like Alexa or Google home. The keyword will be “SUSI”.
If you’re on Linux, open up a terminal window, go to your downloads folder (or the place where the image was downloaded) and type the following commands.
Extract the image
unxz susibian-<timestamp>.img.xz
Example:
unxz susibian-201905170311.img.xz
Write the image to the SD card
Find the disk name of your SD card device by typing lsblk into a terminal window, then write the image, for example with dd:
sudo dd if=<path_to_downloaded_image_file> of=/dev/<disk_name> bs=4M status=progress
Replace <path_to_downloaded_image_file> with the path to the Susibian image.
Replace <disk_name> with the disk name found in the step mentioned before.
NOTE: Check your device name carefully before executing the command, as writing to the wrong disk can result in loss of data!
In the app, go to settings -> Devices -> click here to move to device setup screen
Wait for your device to show up and then click on setup.
Choose the Wi-Fi network you want your speaker to connect to and enter the password.
Add credentials.
OR
Connect your computer or mobile phone to the SUSI.AI hotspot using the password “password”.
Open http://10.0.0.1:5000 which will show you the set-up page as visible below:
Put in your Wi-Fi credentials. For an open network, set an empty password. If the device should connect automatically to any open network, leave both SSID and password empty.
Click on “Reboot Smart Speaker”
Testing
Wait for the speaker to reboot; SUSI will say “SUSI has started” as soon as it is ready.
After the setup, three LEDs on top of ReSpeaker Hat should light up.
In Eventyay Attendee, ordering tickets for events has always been a core functionality that we focus on. When ordering tickets, adding a time counter to hold a reservation and release the tickets after a timeout is a common way to help organizers control their tickets’ distribution and help users keep their reservation. Let’s take a look at how to implement this feature.
Implementing the time counter
Some notes on implementing time counter
Conclusion
Resources
INTEGRATING TIME COUNTER TO YOUR SYSTEM
Step 1: Create the UI for your time counter. Here, we made a simple View container with a TextView inside to show the remaining time.
Step 2: Set up the time counter with Android’s CountDownTimer, which takes the total countdown time and the tick interval. In Eventyay, the default countdown time is 10 minutes (600,000 ms) and the tick interval is 1,000 ms, which means the UI is updated every second.
private fun setupCountDownTimer(orderExpiryTime: Int) {
rootView.timeoutCounterLayout.isVisible = true
rootView.timeoutInfoTextView.text =
getString(R.string.ticket_timeout_info_message, orderExpiryTime.toString())
val timeLeft: Long = if (attendeeViewModel.timeout == -1L) orderExpiryTime * 60 * 1000L
else attendeeViewModel.timeout
timer = object : CountDownTimer(timeLeft, 1000) {
override fun onFinish() {
findNavController(rootView).navigate(AttendeeFragmentDirections
.actionAttendeeToTicketPop(safeArgs.eventId, safeArgs.currency, true))
}
override fun onTick(millisUntilFinished: Long) {
attendeeViewModel.timeout = millisUntilFinished
val minutes = millisUntilFinished / 1000 / 60
val seconds = millisUntilFinished / 1000 % 60
rootView.timeoutTextView.text = "$minutes:$seconds"
}
}
timer.start()
}
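The minute/second arithmetic in onTick can be sketched in plain JavaScript. Note the zero-padding of the seconds, which is our addition rather than part of the snippet above (without it, 9 minutes 5 seconds would render as "9:5"):

```javascript
// Convert the remaining milliseconds into an "m:ss" display string.
function formatCountdown(millisUntilFinished) {
  const minutes = Math.floor(millisUntilFinished / 1000 / 60);
  const seconds = Math.floor(millisUntilFinished / 1000) % 60;
  // Pad seconds so 9 minutes 5 seconds reads "9:05" rather than "9:5".
  return `${minutes}:${String(seconds).padStart(2, '0')}`;
}
```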
Step 3: Set up creating a pending order when the timer starts counting, so that users can hold a reservation for their tickets. A simple POST request with an empty order is made to the API:
fun initializeOrder(eventId: Long) {
val emptyOrder = Order(id = getId(), status = ORDER_STATUS_INITIALIZING, event = EventId(eventId))
compositeDisposable += orderService.placeOrder(emptyOrder)
.withDefaultSchedulers()
.subscribe({
mutablePendingOrder.value = it
orderIdentifier = it.identifier.toString()
}, {
Timber.e(it, "Fail on creating pending order")
})
}
Step 4: Set up canceling the order when the time counter finishes. When the time runs out, the user should be redirected to the previous fragment and a pop-up dialog should appear with a message that the reservation time has finished. There is no need to send an HTTP request to cancel the pending order, as that is handled automatically by the server.
Step 5: Cancel the time counter in case the user leaves the app unexpectedly or moves to another fragment. If this step is skipped, the CountDownTimer keeps counting in the background and may call onFinish() at some point, which could invoke navigation functions and crash the app.
override fun onDestroy() {
super.onDestroy()
if (this::timer.isInitialized)
timer.cancel()
}
RESULTS
CONCLUSION
For a project with a ticketing system, adding a time counter for ordering is a really helpful feature to have. With the help of Android’s CountDownTimer, it is really easy to implement this function and enhance your user experience.
SkillCreator in SUSI.AI, used for creating new skills, and Botbuilder, used for creating new bots, relied on parent-child communication between React components. Much of the state was lifted up to a parent and passed down to its children (Design, Configure, Deploy, SkillCreator) as props. This architecture has many problems, maintainability and readability being the prime ones.
For maintaining the state of SkillCreator and Botbuilder we used Redux, which lets us use a global state so we don’t have to pass props down many levels just to send data. With Redux, the code size reduced and we successfully eliminated a lot of code redundancy.
Basic Data Flow in Redux
A UI Event like onClick happens
The UI event dispatches an action; mapDispatchToProps gives access to the actions defined in the action files.
Perform synchronous or asynchronous tasks like fetching data from external API.
The payload is used by the reducer function, which updates the global store based on the payload and the previous state.
When the global store changes, components subscribed through mapStateToProps receive the updated values of the store over time.
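The cycle above can be illustrated with a tiny hand-rolled store. This mirrors the idea, not the real Redux API; the action and state names are made up for the example:

```javascript
// Minimal store: dispatch runs the reducer and notifies subscribers.
function createMiniStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' });
  const listeners = [];
  return {
    getState: () => state,
    dispatch: action => {
      state = reducer(state, action); // next state from previous state + action
      listeners.forEach(listener => listener()); // cf. mapStateToProps subscriptions
    },
    subscribe: listener => listeners.push(listener),
  };
}

// Example reducer: computes the next state from the action payload.
function viewReducer(state = { view: 'code' }, action) {
  switch (action.type) {
    case 'SET_VIEW':
      return { ...state, view: action.payload };
    default:
      return state;
  }
}
```

Dispatching `{ type: 'SET_VIEW', payload: 'ui' }` runs the reducer and notifies every subscriber with the updated state.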
Code Integration
The state object consists of three nested objects, one per view on the botWizard: the Build tab updates the skill object, the Design tab updates the design object, and so on. The objective of storing these in the Redux store is persistence: when a user navigates to another tab (for example the Design tab), the Build tab’s data is mapped from the store, so no data is lost on switching back. All such required fields are stored in the store.
Default States:
const defaultState = {
skill: {
name:'',
file:null,
category:null,
language:'',
image: avatarsIcon,
imageUrl:'<image_name>',
code:'::name <Bot_name>\n::category <Category>\n::language <Language>\n::author <author_name>\n::author_url <author_url>\n::description <description> \n::dynamic_content <Yes/No>\n::developer_privacy_policy <link>\n::image images/<image_name>\n::terms_of_use <link>\n\n\nUser query1|query2|quer3....\n!example:<The question that should be shown in public skill displays>\n!expect:<The answer expected for the above example>\nAnswer for the user query',
author:'',
},
design: {
botbuilderBackgroundBody:'#ffffff',
botbuilderBodyBackgroundImg:'',
botbuilderUserMessageBackground:'#0077e5',
botbuilderUserMessageTextColor:'#ffffff',
botbuilderBotMessageBackground:'#f8f8f8',
botbuilderBotMessageTextColor:'#455a64',
botbuilderIconColor:'#000000',
botbuilderIconImg: botIcon,
code:'::bodyBackground #ffffff\n::bodyBackgroundImage \n::userMessageBoxBackground #0077e5\n::userMessageTextColor #ffffff\n::botMessageBoxBackground #f8f8f8\n::botMessageTextColor #455a64\n::botIconColor #000000\n::botIconImage ',
},
configCode:"::allow_bot_only_on_own_sites no\n!Write all the domains below separated by commas on which you want to enable your chatbot\n::allowed_sites \n!Choose if you want to enable the default susi skills or not\n::enable_default_skills yes\n!Choose if you want to enable chatbot in your devices or not\n::enable_bot_in_my_devices no\n!Choose if you want to enable chatbot in other user's devices or not\n::enable_bot_for_other_users no",
view:'code',
loading:true,
};
The CREATE_SET_VIEW reducer function updates the view, which can be ‘code’, ‘ui’ or ‘tree’. The setView action is dispatched to update the view tab, and the setSkillData action is dispatched when we want to update the skill object in our global store.
The setSkillData is called whenever the user updates the code of skills.
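A minimal sketch of such reducer cases might look like the following. CREATE_SET_VIEW is named in the post, while CREATE_SET_SKILL_DATA and the payload shapes are assumptions for illustration:

```javascript
const defaultState = { view: 'code', skill: { name: '', code: '' } };

function createReducer(state = defaultState, action) {
  switch (action.type) {
    case 'CREATE_SET_VIEW':
      // view can be 'code', 'ui' or 'tree'
      return { ...state, view: action.payload.view };
    case 'CREATE_SET_SKILL_DATA': // hypothetical action type name
      // merge the updated skill fields into the skill object
      return { ...state, skill: { ...state.skill, ...action.payload } };
    default:
      return state;
  }
}
```

Returning a fresh object on every case keeps the reducer pure, which is what lets the store detect changes.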
Updating Design Code
The setDesignData action is dispatched whenever we update the code in the AceEditor of the Design tab. The generateDesignData function helps in generating the design object from the designCode to update the UI view of the Design tab.
The reducer generates the new designCode and also sets the component color for it to be used in UI View. Similar logic is applied for handling the configured state in the reducer.
To conclude, shifting to the Redux architecture instead of prop drilling proved to be of great advantage. The code became more performant, modular and easy to manage, and the state persists even after a component unmounts.
The Eventyay Attendee is an Android app which allows users to discover events happening around the world using the Open Event Platform. It consumes the APIs of the Open Event server to get a list of available events and detailed information about them.
The project earlier used fragment transactions to handle fragments across multiple activities. Multiple activities were used because it is very complex to handle all fragments in a single activity using plain fragment transactions; attempting a single-activity application that way leads to many back-stack bugs.
Instead, we can use the Navigation Architecture Component to efficiently make Eventyay Attendee a single-activity app. It is the perfect technology to handle all fragments and authentication in a single activity without bugs or back-stack issues.
Issues with fragment transition and multiple activities
Why navigation architecture component?
Process for using Navigation Architecture Component in the application
Animation of fragments with Navigation Architecture Component
Conclusion
Resources
Let’s analyze every step in detail.
Issues with fragment transition and multiple activities
Need to handle back stack on each Fragment Transition
Hard to debug
This can leave your app in an unknown state when receiving multiple click events or during configuration changes.
Duplicate fragment instances can be created
Multiple activities make the app slower due to using multiple intents
Why navigation architecture component?
Handling Fragment transactions
Handling Up and Back actions correctly by default
Providing standardized resources for animations and transitions
Treating deep linking as a first-class operation
Including Navigation UI patterns, such as navigation drawers and bottom navigation, with minimal additional work
Providing type safety when passing information while navigating
Visualizing and editing navigation graphs with Android Studio’s Navigation Editor
Steps involved in using Navigation Architecture Component in the application
Setup the bottom navigation menu with navigation controller:
// import required libraries
import androidx.navigation.fragment.NavHostFragment
import androidx.navigation.ui.NavigationUI.setupWithNavController

// Set nav controller with host fragment using Kotlin smart cast
val hostFragment = supportFragmentManager.findFragmentById(R.id.frameContainer)
if (hostFragment is NavHostFragment)
    navController = hostFragment.navController
setupWithNavController(navigationMenu, navController)
Navigate to a fragment from the bottom navigation whenever required:
navController.navigate(R.id.menuItemId)
Any fragment added to the navigation graph can be navigated to in the same way, with arguments passed via safe args.
Single-activity applications are the future of Android, and many apps are shifting from multiple activities to a single-activity architecture, which brings ease of use and a smooth user experience. In making this shift we used the Navigation Architecture Component, which is much more efficient than plain fragment transactions and avoids the many back-stack bugs associated with them.
In this blog, we will look at a very useful and important feature provided by Android, AsyncTask, and more importantly at how AsyncTasks have been put to use for various functionalities throughout the PSLab Android project.
What are Threads?
Threads are basically paths of sequential execution within a process. In a way, threads are lightweight processes. A process may contain more than one threads and all these threads are executed in parallel. Such a method is called “Multithreading”. Multithreading is very useful when some long tasks need to be executed in the background while other tasks continue to execute in the foreground.
Android has the main UI thread which works continuously and interacts with a user to display text, images, listen for click and touch, receive keyboard inputs and many more. This thread needs to run without any interruption to have a seamless user experience.
Where does AsyncTask come into the picture?
AsyncTask enables proper and easy use of the UI thread. This class allows you to perform background operations and publish results on the UI thread without having to manipulate threads and/or handlers.
In PSLab Android application, we communicate with PSLab hardware through I/O(USB) interface. We connect the PSLab board with the mobile and request and wait for data such as voltage values and signal samples and once the data is received we display it as per requirements. Now clearly we can’t run this whole process on the main thread because it might take a long time to finish and because of that other UI tasks would be delayed which eventually degrade the user experience. So, to overcome this situation, we use AsyncTasks to handle communication with PSLab hardware.
Methods of AsyncTask
AsyncTask is an Abstract class and must be subclassed to use. Following are the methods of the AsyncTask:
onPreExecute()
Used to set up the class before the actual execution
doInBackground(Params…)
This method must be overridden to use AsyncTask. It contains the main part of the task to be executed, such as a network call.
The result from this method is passed as a parameter to onPostExecute() method
onProgressUpdate(Progress…)
This method is used to display the progress of the AsyncTask
onPostExecute(Result)
Called when the task is finished and receives the results from the doInBackground() method
There are 3 generic types passed to the definition of the AsyncTask while inheriting. The three types in order are
Params: Used to pass some parameters to doInBackground(Params…) method of the Task
Progress: Defines the units in which the progress needs to be displayed.
Result: Defines the data type returned from doInBackground() and received as a parameter in the onPostExecute(Result) method.
An example of the usage of the AsyncTask class is as under:
private class SampleTask extends AsyncTask<Params, Progress, Result> {
    @Override
    protected Result doInBackground(Params... params) {
        // The main code goes here
        return result;
    }
    @Override
    protected void onProgressUpdate(Progress... progress) {
        // display the progress
    }
    @Override
    protected void onPostExecute(Result result) {
        // display the result
    }
}
We can create an instance of this class and execute it as under:
SampleTask sampleTask = new SampleTask();
sampleTask.execute(params);
We can cancel a running task by calling its cancel() function:
sampleTask.cancel(true);
AsyncTask in PSLab Android Application
As mentioned earlier, tasks which take a lot of time can’t be executed on the main thread, and in such cases an AsyncTask is used. We will look into some examples where AsyncTask has been put to use in the PSLab Android application.
Delete All Logs:
In the DataLoggerActivity, the user has an option to delete all the logs that have been saved to local storage. There might be a large number of log files that need to be deleted, hence it is better to use an AsyncTask for this. The code snippet for it is below.
As can be seen, we look for all the stored logs and then delete the files one after another in doInBackground(). Once all the files are deleted, onPostExecute() is called, where we make the progress bar disappear. So this is how AsyncTask is used to implement the delete-all-logs feature.
Capture Task and Fourier Transform Output of Signals in Oscilloscope.
To display the generated signal in the oscilloscope, we call the captureTraces() and fetchTraces() functions from the ScienceLab class. Both of these functions communicate with the PSLab board, request data, receive it, manipulate it into the desired format and then display the signal on the oscilloscope screen. Clearly we can’t afford to run such a process on the main thread, so we use an AsyncTask to handle it.
In the oscilloscope, there is a feature to see the Fourier transform output of the generated signal, computed with the Fast Fourier Transform (FFT) method. The time complexity of the FFT is O(N log(N)), where N is the number of samples of the input signal. Even though the FFT is fast, we can’t risk running this function on the main thread, so once again we get help from AsyncTask. Both of these functionalities are included in the same AsyncTask class, called captureTask. A snippet for this task can be seen below.
In this part of the captureTask class, we use the captureTraces() and fetchTraces() functions to get the signal samples and store them in the data variable. Below is the part where we call fft() on the input signal.
This is a very simple part where we just call the Fast Fourier Transform function if the user has selected to see the Fourier transform output. The implementation of the FFT function can be seen below.
public Complex[] fft(Complex[] input) {
    Complex[] x = input;
    int n = x.length;
    if (n == 1) return new Complex[]{x[0]}; // a single element is its own transform
    if (n % 2 != 0) {
        // The number of samples should be even for this function to run, so in case
        // of an odd count we drop the last sample; this doesn't affect the output
        // significantly.
        x = Arrays.copyOfRange(x, 0, x.length - 1);
        n = x.length; // keep n consistent with the trimmed array
    }
    Complex[] halfArray = new Complex[n / 2];
    for (int k = 0; k < n / 2; k++) {
        halfArray[k] = x[2 * k]; // input terms at even places
    }
    Complex[] q = fft(halfArray); // recursive call for even terms
    for (int k = 0; k < n / 2; k++) {
        halfArray[k] = x[2 * k + 1]; // input terms at odd places
    }
    Complex[] r = fft(halfArray); // recursive call for odd terms
    Complex[] y = new Complex[n]; // array of final output
    for (int k = 0; k < n / 2; k++) {
        double kth = -2 * k * Math.PI / n;
        Complex wk = new Complex(Math.cos(kth), Math.sin(kth)); // kth root of unity ("kernel")
        if (r[k] == null) {
            r[k] = new Complex(1); // exception handling
        }
        if (q[k] == null) {
            q[k] = new Complex(1); // exception handling
        }
        y[k] = q[k].add(wk.multiply(r[k])); // kth term: sum of even and odd parts
        y[k + n / 2] = q[k].subtract(wk.multiply(r[k])); // (k + n/2)th term: their difference
    }
    return y; // resultant array
}
This is a classic implementation of the Fast Fourier Transform. We divide the input samples into odd- and even-placed terms and call the same function recursively until only one term is left. Then, using the nth complex roots of unity (n being the number of samples), we combine the results of the odd-term and even-term fft() calls to get the final output. Since each level halves the input there are O(log N) levels, and merging the odd and even outputs at each level takes O(N), so the total complexity is O(N log N). Because computing the Fourier transform of a large input can still take a while, it has to run inside the AsyncTask and not on the main thread.
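For reference, the same divide-and-conquer recursion can be sketched compactly in JavaScript; this is an illustrative sketch that mirrors the Java code rather than reproducing it, with complex numbers as plain { re, im } objects and the input length assumed to be a power of two:

```javascript
// Recursive radix-2 FFT; x is an array of { re, im } objects whose
// length is assumed to be a power of two.
function fft(x) {
  const n = x.length;
  if (n === 1) return [x[0]];
  const even = fft(x.filter((_, i) => i % 2 === 0)); // even-indexed terms
  const odd = fft(x.filter((_, i) => i % 2 === 1)); // odd-indexed terms
  const y = new Array(n);
  for (let k = 0; k < n / 2; k++) {
    const kth = (-2 * Math.PI * k) / n;
    const wk = { re: Math.cos(kth), im: Math.sin(kth) }; // kth root of unity
    const t = { // t = wk * odd[k]
      re: wk.re * odd[k].re - wk.im * odd[k].im,
      im: wk.re * odd[k].im + wk.im * odd[k].re,
    };
    y[k] = { re: even[k].re + t.re, im: even[k].im + t.im };
    y[k + n / 2] = { re: even[k].re - t.re, im: even[k].im - t.im };
  }
  return y;
}
```

For a constant input of four ones, all the energy lands in the DC bin, which is a quick sanity check on the recursion.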
There are many other functionalities throughout the app, where AsyncTask has been used. In a nutshell, AsyncTask is a very useful method to handle longer tasks off the main thread.
This blog post will elaborate on how tax information is displayed on the public page of an event. In the initial implementation, the user got to know the total tax-inclusive amount only after deciding to place an order; no such information was given on the public ticket page itself.
Example : In initial implementation, the user gets to know that the order is of only $120 and no information is given about the additional 30% being charged and taking the total to $156.
To tackle this issue, I added two computed properties to the ticket object to handle the two tax cases:
Inclusion in the price: In European and Asian countries, the tax amount is included in the ticket price itself. For this case, I created a computed property, includedTaxAmount, to store the tax amount included in the gross amount.
Added on the ticket price: In the basic US tax policy, the tax amount is added on top of the ticket price. For such cases I added a new attribute, ticketPriceWithTax, to the ticket model, which calculates the total amount payable for that particular ticket with tax included.
Hence the new public ticket list displays either the included tax amount or the additional charge accordingly.
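The two cases reduce to simple arithmetic, sketched below with hypothetical standalone helpers; with the $120 ticket and 30% rate from the example above, the add-on case yields $156:

```javascript
// US-style: tax is added on top of the ticket price.
function ticketPriceWithTax(price, taxRate) {
  return price * (1 + taxRate / 100);
}

// EU/Asia-style: tax is already included in the gross price;
// this extracts the tax portion from the gross amount.
function includedTaxAmount(gross, taxRate) {
  return (taxRate * gross) / (100 + taxRate);
}
```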
Discount Code application cases:
When a user applies a discount code, the ticket price needs to be updated, and hence the tax applied has to be updated accordingly. I achieved this by updating the two computed properties of the ticket model on each togglePromotionalCode and applyPromotionalCode action. When a promotional code is applied, the appropriate attribute is updated according to the discount offered.
Similarly, on toggling the discount code off, the ticket’s computed properties are set back to their initial value using the same formula kept during the time of initialization which has been achieved in the following manner.
// app/components/public/ticket-list.js
this.discountedTickets.forEach(ticket => {
  let taxRate = ticket.get('event.tax.rate');
  let ticketPrice = ticket.get('price');
  if (taxRate && !this.showTaxIncludedMessage) {
    let ticketPriceWithTax = ticketPrice * (1 + taxRate / 100);
    ticket.set('ticketPriceWithTax', ticketPriceWithTax);
  } else if (taxRate && this.showTaxIncludedMessage) {
    let includedTaxAmount = (taxRate * ticketPrice) / (100 + taxRate);
    ticket.set('includedTaxAmount', includedTaxAmount);
  }
  ticket.set('discount', 0);
});
This particular change makes sure that the tax amount is calculated properly as per the discounted amount and thus eliminates the possibility of overcharging the attendee.
In conclusion, this feature has been implemented keeping in mind the consumer’s interest in using the Open Event Frontend and the ease of tax application on the public level with minimum required network requests.
In Eventyay Attendee, searching for events has always been a core function that we focus on. When searching for events based on location, autosuggestion based on user input really comes out as a great feature to improve the user experience. Let’s take a look at the implementation.
Why using Mapbox?
Integrating places autosuggestion for searching
Conclusion
Resources
WHY USING MAPBOX?
There are many Map APIs to consider, but we chose Mapbox as it is really easy to set up and use, has good documentation, and offers reasonable pricing for an open-source project compared to other Map APIs.
INTEGRATING PLACES AUTOSUGGESTION FOR SEARCHING
Step 1: Setup dependency in the build.gradle + the MAPBOX key
Step 2: Set up functions inside ViewModel to handle autosuggestion based on user input:
private fun loadPlaceSuggestions(query: String) {
// Cancel Previous Call
geoCodingRequest?.cancelCall()
doAsync {
geoCodingRequest = makeGeocodingRequest(query)
val list = geoCodingRequest?.executeCall()?.body()?.features()
uiThread { placeSuggestions.value = list }
}
}
private fun makeGeocodingRequest(query: String) = MapboxGeocoding.builder()
    .accessToken(BuildConfig.MAPBOX_KEY)
    .query(query)
    .languages("en")
    .build()
Based on the input, these functions update the UI with the new list of auto-suggested location texts. The MAPBOX_KEY can be obtained from the Mapbox API.
Step 3: Create an XML file to display the auto-suggested string items and set up a RecyclerView in the main UI fragment.
Step 4: Set up the ListAdapter and ViewHolder to bind the list of auto-suggested location strings. Here, we use CarmenFeature as the item type for the ListAdapter. Its .placeName() function provides information about the location so that the ViewHolder can bind the data.
class PlaceSuggestionsAdapter :
    ListAdapter<CarmenFeature, PlaceSuggestionViewHolder>(PlaceDiffCallback()) {

    var onSuggestionClick: ((String) -> Unit)? = null

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): PlaceSuggestionViewHolder {
        val itemView = LayoutInflater.from(parent.context)
            .inflate(R.layout.item_place_suggestion, parent, false)
        return PlaceSuggestionViewHolder(itemView)
    }

    override fun onBindViewHolder(holder: PlaceSuggestionViewHolder, position: Int) {
        holder.apply {
            bind(getItem(position))
            onSuggestionClick = this@PlaceSuggestionsAdapter.onSuggestionClick
        }
    }

    class PlaceDiffCallback : DiffUtil.ItemCallback<CarmenFeature>() {
        override fun areItemsTheSame(oldItem: CarmenFeature, newItem: CarmenFeature): Boolean {
            return oldItem.placeName() == newItem.placeName()
        }

        override fun areContentsTheSame(oldItem: CarmenFeature, newItem: CarmenFeature): Boolean {
            return oldItem == newItem
        }
    }
}
Place autosuggestion is a really helpful and interesting feature to include in your next project. With the help of the Mapbox SDK, it is really easy to implement and enhances the user experience of your application.
Google Analytics provides SUSI.AI admins with a way to analyze traffic and get advanced metrics. It first collects data, processes it, and showcases it on the console dashboard. It is used for keeping track of user behavior on the website.
How Google Analytics Works
Shown below are the fields used by Google Analytics to collect user data. A cookie is stored in the user’s browser: _ga stays in the browser for 2 years, _gid for 2 days.
Whenever a user performs an event such as a mouse click, a page change, opening a popup, or adding query strings to the URL, information describing the event is sent to Google Analytics using an API call, as shown in the photo below:
The bordered boxes above contain the information sent to Google Analytics:
The user identification code
The screen resolution of the user’s device
The user’s language
The URL of the page the user is on
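Such a payload can be pictured as a hypothetical pageview hit, loosely modeled on the Google Analytics Measurement Protocol (the parameter names v, cid, sr, ul, dp come from that protocol; the values here are placeholders):

```javascript
// Builds an illustrative Measurement Protocol-style pageview hit URL.
// Values are placeholders; a real hit also carries the property ID (tid).
function buildHit({ clientId, resolution, language, page }) {
  const params = new URLSearchParams({
    v: '1',          // protocol version
    cid: clientId,   // user identification code
    sr: resolution,  // screen resolution of the user's device
    ul: language,    // user language
    dp: page,        // page the user is on
    t: 'pageview'    // hit type
  });
  return 'https://www.google-analytics.com/collect?' + params.toString();
}
```

Calling buildHit({ clientId: '555', resolution: '1920x1080', language: 'en-us', page: '/chat' }) yields a collect URL carrying exactly the pieces of information listed above.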
How it processes data
When a user lands on SUSI.AI with the tracking code, Google Analytics creates a unique random identity and attaches it to the user’s cookie.
Each new user is given a unique ID. Whenever a call arrives with an ID that Analytics has not seen before, it is treated as a new user; when an existing ID is detected, the hit is counted for a returning user and the metrics are calculated accordingly. However, the same person is detected as a new user again if they clear the browser cookies or view the page from another device, even over the same IP address.
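The new-versus-returning decision described above can be sketched, framework-free, as follows; `cookies` is a plain object standing in for the browser’s cookie jar, and the _ga value format is illustrative only:

```javascript
// Classifies a hit as coming from a new or a returning user, based on
// whether a client ID cookie already exists.
function classifyHit(cookies) {
  if (!cookies._ga) {
    // No client ID yet: mint one and count this hit as a new user
    cookies._ga = 'GA1.2.' + Math.floor(Math.random() * 1e9) + '.' + Date.now();
    return 'new user';
  }
  // Existing client ID: the hit belongs to a returning user
  return 'returning user';
}
```

Emptying the jar, as clearing browser cookies would, makes the same person count as a new user again on the next hit.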
Code Integration
Google Analytics must be initialized using the initialize(gaTrackingID) function before any of the other tracking functions will record data. Through react-ga's ga('create', …), the values are sent to Google Analytics.
A Higher-Order Component (HOC) is a function that returns an enhanced component by adding some more properties or logic, and allows component logic to be reused.
Using an HOC for page tracking means individual components are not exposed to tracking concerns. The withTracker HOC wraps every component that SUSI.AI wants to track and exposes a trackPage method.
Whenever a component mounts, we track the new page in componentDidMount.
Whenever the component updates and the browser URL changes, we track the sub-page in the componentDidUpdate lifecycle hook.
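As a rough, framework-free sketch of the withTracker idea: plain render functions stand in for React components here, and trackPage stands in for react-ga’s pageview call; none of this is SUSI.AI’s actual code.

```javascript
const tracked = [];

function trackPage(page) {
  tracked.push(page); // stand-in for ReactGA.pageview(page)
}

// Higher-order function: wraps any "component" so every render of a page
// records a pageview first, keeping tracking out of the component itself.
function withTracker(render) {
  return function(page) {
    trackPage(page); // what componentDidMount/componentDidUpdate trigger
    return render(page);
  };
}
```

Wrapping a settings renderer once and calling it with '/settings' both renders the page and records the pageview, without the renderer knowing anything about tracking.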
The Higher-Order Component pattern turned out to be really useful for achieving DRY (Don’t Repeat Yourself) and keeping components separate from tracking. With react-ga added, we can track various metrics, see live users on the site, and see how SUSI.AI traffic performs over time.