The Open Event Organizer Android App is used by event organizers to manage events on the Eventyay platform. While creating or updating an event, the location is one of the important details that needs to be added so that attendees can be informed of the venue.
Here, we’ll go through the process of implementing Mapbox Places Autocomplete for event location in the F-Droid build variant.
The first step is to create an environment variable for the Mapbox Access Token.
The app should not crash if the access token is not available, so we need to add a check. Since the default value of the access token is set to “YOUR_ACCESS_TOKEN”, the following code checks whether a real token is available:
if (mapboxAccessToken.equals("YOUR_ACCESS_TOKEN")) {
    ViewUtils.showSnackbar(binding.getRoot(), R.string.access_token_required);
    return;
}
Now, a listener needs to be set up to get the selected place and set the various fields like latitude, longitude, location name and searchable location name.
Android App Links are HTTP URLs that bring users directly to specific content in an Android app. They allow the website URLs to immediately open the corresponding content in the related Android app.
Whenever such a URL is clicked, a dialog is opened allowing the user to select a particular app which can handle the given URL.
In this blog post, we will be discussing the implementation of Android App Links for password reset in Open Event Organizer App, the Android app developed for event organizers using the Eventyay platform.
What is the purpose of using App Links?
App Links are used to open the corresponding app when a link is clicked.
If the app is installed, then it will open on clicking the link.
If the app is not installed, then the link will open in the browser.
The first steps involve:
Creating intent filters in the manifest.
Adding code to the app’s activities to handle incoming links.
Associating the app and the website with Digital Asset Links.
Adding Android App Links
The first step is to add an intent-filter for the AuthActivity.
The Play Store build variant of the app uses Google Vision API for scanning attendees. This cannot be used in the F-Droid build variant since F-Droid requires all the libraries used in the project to be open source. Thus, we’ll be using this library: https://github.com/blikoon/QRCodeScanner
We’ll start by creating separate ScanQRActivity, ScanQRView and activity_scan_qr.xml files for the F-Droid variant. We’ll be using a common ViewModel for the F-Droid and Play Store build variants.
Let’s start with requesting the user for camera permission so that the mobile camera can be used for scanning QR codes.
public void onCameraLoaded() {
    if (hasCameraPermission()) {
        startScan();
    } else {
        requestCameraPermission();
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
    if (requestCode != PERM_REQ_CODE)
        return;
    // If request is cancelled, the result arrays are empty.
    if (grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        cameraPermissionGranted(true);
    } else {
        cameraPermissionGranted(false);
    }
}

@Override
public boolean hasCameraPermission() {
    return ContextCompat.checkSelfPermission(this, permission.CAMERA) == PackageManager.PERMISSION_GRANTED;
}

@Override
public void requestCameraPermission() {
    ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.CAMERA}, PERM_REQ_CODE);
}

@Override
public void showPermissionError(String error) {
    Toast.makeText(this, error, Toast.LENGTH_SHORT).show();
}

public void cameraPermissionGranted(boolean granted) {
    if (granted) {
        startScan();
    } else {
        showProgress(false);
        showPermissionError("User denied permission");
    }
}
Once the camera permission is granted, or if it was already granted, the startScan() method is called.
Handling Planned Actions for the SUSI Smart Speaker
Planned actions are the latest feature added to the SUSI Smart Speaker. The user now has the option to set timed actions such as alarms and countdown timers. For example, the user can say “SUSI, set an alarm in one minute” and after one minute the smart speaker will notify them.
The following flowchart represents the workflow for the working of a planned action:
Planned Action Response
The SUSI Server accepts the planned action query and sends a multi-action response which looks like this:
"actions": [
  {
    "language": "en",
    "type": "answer",
    "expression": "alarm set for in 1 minute"
  },
  {
    "expression": "ALARM",
    "language": "en",
    "type": "answer",
    "plan_delay": 60003,
    "plan_date": "2019-08-19T22:28:44.283Z"
  }
]
Here we can see that there are two actions in the server response. The first action is of the type “answer” and is executed by the SUSI Linux client immediately; the other has the `plan_date` and `plan_delay` keys, which tell the busy state of the SUSI Linux client that this is a planned action, which is then sent to the scheduler.
Parsing Planned Action Response From The Server
The SUSI Python wrapper is responsible for parsing the response from the server and making it understandable to the SUSI Linux client. In SUSI Python we have classes which represent the different types of actions possible. SUSI Python takes all the actions sent by the server and parses them into objects of the different action types. To enable handling planned actions, we add two more attributes to the base action class – `planned_date` and `planned_delay`.
All the action types which can be planned actions call the base class’s constructor to set the values of the planned_delay and planned_date attributes.
The next step is to parse the different action type objects and generate the final result:
def generate_result(response):
    result = dict()
    actions = response.answer.actions
    data = response.answer.data
    result["planned_actions"] = []
    for action in actions:
        data = dict()
        if isinstance(action, AnswerAction):
            if action.plan_delay != None and action.plan_date != None:
                data['answer'] = action.expression
                data['plan_delay'] = action.plan_delay
                data['plan_date'] = action.plan_date
            else:
                result['answer'] = action.expression
        if data != {}:
            result["planned_actions"].append(data)
Here, if the action object has a non-None value for the planned attributes, the action object’s values are added to a planned-actions list.
Listening to Planned Actions in the SUSI Linux Client
In the busy state, we check whether the payload coming from the IDLE state is a query or a planned response coming from the scheduler. If the payload is a query, it is sent to the server; otherwise, the payload is executed directly.
If the payload was a query and the server replies with a planned action response, the server response is sent to the scheduler:
if 'planned_actions' in reply.keys():
    for plan in reply['planned_actions']:
        self.components.action_schduler.add_event(int(plan['plan_delay']) / 1000, plan)
The scheduler then schedules the event and sends the payload to the IDLE state after the required delay. To trigger planned actions we implemented an event-based observer using RxPy. The listener resides in the idle state of the SUSI state machine.
if self.components.action_schduler is not None:
    self.components.action_schduler.subject.subscribe(
        on_next=lambda x: self.transition_busy(x))
The observer in the IDLE state on receiving an event sends the payload to the busy state where it is processed. This is done by the transition_busy method which uses the allowedStateTransitions method.
In this blog, I’ll explain the different APIs involved in the SUSI.AI Bot Builder and how they work. If you are wondering how the SUSI.AI Bot Builder works or how drafts are saved, this is the perfect blog. I’ll be explaining the different APIs grouped by their API endpoints.
This API is used to fetch all your saved chatbots, which are displayed on the BotBuilder page. The API endpoint is getSkillList.json. The same endpoint is used when a user creates a skill; the difference is that the query parameter private is passed, which then returns your chatbots. If you are wondering why we have the same endpoint for skills and chatbots, the simple reason is that chatbots are your private skills.
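As a rough sketch of how such a call could look (the function name and the value of the private parameter are illustrative, not the exact susi.ai client code; it follows the same ajax helper pattern as the deleteDraft function shown further below):

export function fetchChatBots() {
  // passing 'private' tells getSkillList.json to return only your chatbots (private skills)
  const url = `${API_URL}/${CMS_API_PREFIX}/getSkillList.json`;
  return ajax.get(url, { private: 1 });
}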
This API is used to fetch the details of a bot or skill from the API endpoint getSkill.json. Group name, language, skill name, private and model are passed as query parameters.
This API is used to fetch skill and bot images from the API endpoint getSkill.json. Group name, language, skill name and private are passed as query parameters.
This API is used to upload the bot image to the API endpoint uploadImage.json. The Content-Type entity header is used to indicate the media type of the resource. multipart/form-data means no characters will be encoded. This is used when a form requires binary data, like the contents of a file or image, to be uploaded.
This API is used to store a draft bot to the API endpoint storeDraft.json. The object passed as a parameter has the properties given by the user while saving the draft, such as the skill name and group.
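A similarly hedged sketch for saving a draft (the parameter name object and the use of the same ajax helper are assumptions based on the description above; the real client may send the draft differently):

export function storeDraft(payload) {
  // payload.object describes the draft (skill name, group, language, ...)
  const { object } = payload;
  const url = `${API_URL}/${CMS_API_PREFIX}/storeDraft.json`;
  return ajax.get(url, { object });
}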
This API is used to fetch draft from the API endpoint readDraft.json. This API is called on the BotBuilder Page where all the saved drafts are shown.
deleteDraft
export function deleteDraft(payload) {
const { id } = payload;
const url = `${API_URL}/${CMS_API_PREFIX}/deleteDraft.json`;
return ajax.get(url, { id });
}
This API is used to delete the saved Draft from the API endpoint deleteDraft.json. It only needs one query parameter i.e. the draft ID.
In conclusion, the above APIs are the backbone of the SUSI.AI Bot Builder. API endpoints in the server ensure the user has the same experience across clients. Do check out the implementation of the different API endpoints in the server here.
Earlier, the Open Event Attendee Android app used only an access token, which often caused HTTP 401 errors once the token expired and forced the user to sign in again. Now we have implemented refresh token authorization with the Open Event Server using Retrofit and OkHttp. Retrofit is one of the most popular HTTP clients for Android.

When calling an API we may require authentication using a token. Usually the token expires after a certain amount of time and needs to be refreshed using the refresh token, so the client has to send an additional HTTP request to get a new token. Imagine you have a collection of many different APIs, each of which requires token authentication. If you had to handle the refresh token by modifying your code one call at a time, it would take a lot of time and, of course, would not be a good solution. In this blog, I’m going to show you how to handle the refresh token automatically on each API call when the token expires.
How the refresh token works
Add authenticator to OkHttp
Network call and handle response
Conclusion
Resources
Let’s analyze every step in detail.
How the Refresh Token Works
Whether tokens are opaque or not is usually defined by the implementation. Common implementations allow for direct authorization checks against an access token. That is, when an access token is passed to a server managing a resource, the server can read the information contained in the token and decide itself whether the user is authorized or not (no checks against an authorization server are needed). This is one of the reasons tokens must be signed (using JWS, for instance). On the other hand, refresh tokens usually require a check against the authorization server.
Add Authenticator to OkHttp
OkHttp will automatically ask the Authenticator for credentials when a response is 401 Unauthorized, and retry the last failed request with them.
class TokenAuthenticator : Authenticator {

    override fun authenticate(route: Route?, response: Response): Request? {
        // Refresh your access_token using a synchronous api request
        val newAccessToken = service.refreshToken()

        // Add new header to rejected request and retry it
        return response.request().newBuilder()
            .header(AUTHORIZATION, newAccessToken)
            .build()
    }
}
Add the authenticator to OkHttp:
val builder = OkHttpClient().newBuilder()
.authenticator(TokenAuthenticator())
The response of the refresh request is mapped to a Kotlin data class; the @JsonNaming annotation handles the snake_case field names in the JSON:

@JsonNaming(PropertyNamingStrategy.SnakeCaseStrategy::class)
data class RefreshResponse(
    val refreshToken: String
)
In a Nutshell
Refresh tokens improve security and allow for reduced latency and better access patterns to authorization servers. Implementations can be simple using tools such as JWT + JWS. If you are interested in learning more about tokens (and cookies), check our article here.
The implementation of event invoice tables is already explained in the blog post Implementation of Event Invoice view using Ember Tables. This blog post is an extension which will showcase the implementation of paid route for event invoice in Open Event Frontend. Event invoices can be explained as the monthly fee given by the organizer to the platform for hosting their event.
We begin by defining a route for displaying the paid event invoice. We choose the event-invoice/paid route in router.js.
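A minimal sketch of what this could look like in router.js (the dynamic segment name is an assumption, not the exact Open Event Frontend code):

this.route('event-invoice', { path: '/event-invoice/:invoice_identifier' }, function() {
  this.route('review');
  this.route('paid');
});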
Now, we need to finalize the paid route. The route is divided into three sections (a simplified sketch of the route follows this list) –
titleToken: To display title in the tab of the browser.
model: To fetch event-invoice with the given identifier.
afterModel: To check whether the status of the fetched event invoice is paid or due. If it is due, the user is redirected to the review page; otherwise they are redirected to the paid page.
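Putting the three hooks together, a simplified sketch of the paid route could look like this (the model name, attribute names and redirect target are assumptions, not the exact Open Event Frontend code):

import Route from '@ember/routing/route';

export default Route.extend({
  titleToken: 'Paid',

  model(params) {
    // fetch the event invoice with the given identifier
    return this.store.findRecord('event-invoice', params.invoice_identifier);
  },

  afterModel(invoice) {
    // a due invoice is redirected back to the review page
    if (invoice.get('status') !== 'paid') {
      this.transitionTo('event-invoice.review', invoice.get('identifier'));
    }
  }
});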
1. Invoice Summary:
The invoice summary contains some of the details of the event invoice: the event name, date of issue, date of completion and amount payable. The snippet for the invoice summary can be viewed here
2. Event Info:
<div class="mobile hidden six wide column">
{{event-invoice/event-info event=model.event}}
</div>
The event info section of the paid page contains the description of the event to which the invoice is associated. It contains event location, start date and end date of the event. The snippet for event info can be viewed here
3. Billing Info:
The billing info section of the event invoice paid page contains the billing info of the user with whom the event is associated and that of the admin. The billing info includes the name, email, phone, zip code and country of the user and the admin. The snippet for billing info can be viewed here
4. Payee Info:
The payee information displays the name and email of the user who pays for the invoice, as well as the method of payment along with relevant information. The snippet for payee info can be viewed here
We can download the invoice of the payment made for the event invoice. This is triggered when the Print Invoice button is clicked.
The download is triggered by the downloadEventInvoice function, which is called when the Print Invoice button is clicked; the event name and order ID are passed as parameters. When the invoice PDF generation is successful, the message Here is your Event Invoice is displayed on the screen, whereas if there is an error, the message Unexpected error occurred is displayed.
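The original snippet is not reproduced here, but a hedged sketch of such an action could look like the following (the loader service, endpoint path and notification helpers are assumptions, not the actual Open Event Frontend code):

downloadEventInvoice(eventName, orderId) {
  this.set('isLoading', true);
  this.get('loader')
    .downloadFile(`/event-invoices/${orderId}/pdf`) // hypothetical endpoint
    .then(file => {
      // save the generated PDF under a name based on the event and order
      const anchor = document.createElement('a');
      anchor.href = URL.createObjectURL(new Blob([file], { type: 'application/pdf' }));
      anchor.download = `${eventName}-${orderId}.pdf`;
      anchor.click();
      this.get('notify').success('Here is your Event Invoice');
    })
    .catch(() => {
      this.get('notify').error('Unexpected error occurred');
    })
    .finally(() => this.set('isLoading', false));
}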
SUSI.AI now has a chat bubble on the bottom right of every page to assist you. The chat bubble allows you to connect to susi.ai with just a click. You can now directly play test examples of skills on the chatbot. It can also be viewed in full-screen mode.
Redux Code
The Redux action associated with the chat bubble is handleChatBubble. The Redux state chatBubble can have 3 values (a simplified reducer sketch follows the list):
minimised – the chat bubble is not visible. This state is set when the chat is viewed in full-screen mode.
bubble – only the chat bubble is visible. This state is set when the close icon is clicked and on toggle.
full – the chat bubble along with the message section is visible. This state is set when the minimize icon is clicked in full-screen mode and on toggle.
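A simplified sketch of how the reducer could handle this action (the action type constant, payload shape and default value are assumptions, not the exact susi.ai code):

const defaultState = { chatBubble: 'bubble' };

function chatBubbleReducer(state = defaultState, action) {
  switch (action.type) {
    // action.payload.chatBubble is one of 'minimised', 'bubble' or 'full'
    case 'HANDLE_CHAT_BUBBLE':
      return { ...state, chatBubble: action.payload.chatBubble };
    default:
      return state;
  }
}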
The user can click on the speech box for a skill example and immediately view the answer for that skill on the chatbot. When a speech bubble is clicked, a query parameter testExample is added to the URL. The value of this query parameter is resolved and answered by the MessageComposer. To be able to generate an answer bubble again and again for the same query, we have a reducer state testSkillExampleKey which is updated when the user clicks on the speech box. This is passed as the key parameter to the messageSection.
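For instance, the query parameter can be read straight from the URL (a rough sketch; the surrounding component code is omitted and the example query is made up):

// read the skill example passed in the URL, e.g. ?testExample=Tell+me+a+joke
const params = new URLSearchParams(window.location.search);
const testExample = params.get('testExample');

The testSkillExampleKey reducer state is then passed as the React key of the message section, so every click on a speech box remounts it and the same example is answered again.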
Chat Bubble Code
The functions involved in the working of chatBubble code are:
openFullScreen – This function is called when the full-screen icon is clicked in the tablet and laptop view, and also when the chat bubble is clicked in the mobile view. It opens up a full-screen dialog with the message section. It dispatches the handleChatBubble action, which sets the chatBubble reducer state to minimised.
closeFullScreen – This function is called when the exit full-screen icon is clicked. It dispatches the handleChatBubble action, which sets the chatBubble reducer state to full.
toggleChat – This function is called when the user clicks on the chat bubble. It dispatches the handleChatBubble action, which toggles the chatBubble reducer state between full and bubble.
handleClose – This function is called when the user clicks on the close icon. It dispatches the handleChatBubble action, which sets the chatBubble reducer state to bubble.
The message section comprises three parts: the actionBar, the messageSection and the MessageComposer.
Action Bar
The actionBar consists of the action buttons – search, full screen, exit full screen and close. Clicking on the search button expands and opens up a search bar. On clicking the full-screen icon, the openFullScreen function is called, which opens up the chat dialog. On clicking the close icon, the handleClose function is called, which sets the chatBubble reducer state to bubble. In the full-screen view, clicking on the exit full-screen icon calls the closeFullScreen function, which sets the reducer state chatBubble to full.
The message section has two parts: the MessageList and the MessageComposer. The MessageList is where the messages are viewed, and the MessageComposer allows you to interact with the bot through text and speech. The ScrollBar is imported from the npm library react-custom-scrollbars. When the scroll bar is moved, it sets the state of showScrollTop and showScrollBottom in the chat. messageListItems consists of all the messages between the user and the bot.
In the previous versions of the firmware for the smart speaker, we had a separate flask server for serving the configuration page and another flask server for serving the control page. This led to inconsistencies in the frontend. To make the frontend and the user experience the same across all platforms, the SUSI.AI Web Client is now integrated into the smart speaker.
Now whenever the device is in Access Point mode (Setup mode), and the user accesses the web client running on the smart speaker, the device configuration page is shown.
If the device is not in access point mode, the normal Web Client is shown with a green bar on top saying “You are currently accessing the local version of your SUSI.AI running on your smart device. Configure now”. Clicking on the Configure now link redirects the user to the control page, where the user can change the settings for their smart speaker and control media playback.
To integrate both the control web page and the configure web page into the web client, we needed to combine the flask servers for the control and configure pages. This is done by adding all the endpoints of the configure server to the sound server and then renaming the sound server to the control server – Merge Servers
Serving the WebClient through the control server
The next step is to add the Web Client static code to the control server and make it available on the root URL of the server. To do this, first, we have to clone the web client during the installation and add the required symbolic links for the server.
echo "Downloading: Susi.AI webclient" if [ ! -d "susi.ai" ] then git clone --depth 1 -b gh-pages https://github.com/fossasia/susi.ai.git ln -s $PWD/susi.ai/static/* $PWD/susi_installer/raspi/soundserver/static/ ln -s $PWD/susi.ai/index.html $PWD/susi_installer/raspi/soundserver/templates/ else echo "WARNING: susi.ai directory already present, not cloning it!" >&2 fi
The next step is to add the route for the Web Client in the control flask server.
The index.html file served here is the Web Client’s index.html, which we linked into the templates folder in the previous step.
Connecting the web client to the locally running SUSI Server
The Web Client by default connects to the SUSI Server at https://api.susi.ai; however, the smart speaker has its own server running. To connect to the local server we do the following:
– Before building the local version of the web client from source, we need to set the process.env.REACT_APP_LOCAL_ENV environment variable to true.
– While building, the JavaScript function check() checks whether the above-mentioned environment variable is set to true and, if yes, sets the API_URL to http://<host_IP_address>:4000:
function check() {
  if (process && process.env.REACT_APP_LOCAL_ENV === 'true') {
    return 'http://' + window.location.hostname + ':4000';
  }
  return 'https://api.susi.ai';
}
In this blog post we will talk about the software architecture that we have followed in PSLab Desktop. We will discuss in detail about how the three components, ie – Electron, React and Python gel together in our architecture and give the user a smooth, high performance, cross platform experience.
The picture below is all there is to the PSLab Desktop architecture. I have numbered all the main components and will break them down in a later section of the article. Once you keep this architecture in the back of your head while writing the app, you will be able to think clearly and code more efficiently.
An overview of the whole architecture
I am sure there are many other approaches available to build an electron app, but for the stack I used, this architecture turned out to be the only working model that went all the way from development to production and proved its stability after a lot of experimentation.
The PSLab Desktop happens to be a software interface that needs to interact and fetch data from an actual hardware device at a very fast rate and display the results in real time. The stability of this architecture has been tested under various use cases of the PSLab Desktop.
Let’s break it down
There are 4 major parts in the whole architecture, let us talk about them one by one.
1. Main Process
If you read the documentation of electron, it states that the purpose of the main process is to create and manage browser windows. Each browser window can then run its own Web page/GUI. As this process is kind of central to managing the whole app, there can be just one and only one main process throughout an electron app life cycle.
The main process has a secondary yet important task of communicating with the OS to make use of native GUI features like notifications, creating shortcuts, responding to auto-updates and so on.
It also plays a key role in communication, acting as the middle man for all the browser windows it has created. This is in fact the backbone of the whole architecture we are relying on.
2. Renderer Process (Visible)
Every browser window that the main process creates gives rise to a renderer process. For the sake of simplicity, let us consider each browser window to be equivalent to a renderer process. The only purpose of a renderer process is to show a web page. In other words, its task is to render the GUI.
It is also important to note that each browser window that the main process generates runs independently much like tabs on our browser. Even if a tab crashes, the rest of the tabs keep running. This very fundamental feature gives us limitless powers to create stuff.
Q1. Can I make use of native OS features directly from renderer process?
No, you cannot. This is due to security reasons, and thus to make use of native OS features, every renderer process has to communicate with the main process and ask it to get the job done on its behalf.
Note: Actually, you can access the main process features directly from a renderer process through something called the remote module, but that is not considered a healthy practice, as careless code may end up leaking resources.
Everything that you need to know about main and renderer process has been mentioned in the electron documentation.
You may have noticed that I have written the word visible separately for the renderer process. Why is that? Because this is the only renderer that will be visible. As mentioned before, we are using React as one of our tech stack elements, which is a great tool for writing Single Page Applications. And as you may have guessed, unlike old web apps, Single Page Applications do not need to open several tabs; rather, they are like a mobile app that gives the user a smooth experience on a single screen/tab. Thus, we will only need one tab to actually render the whole UI of the app. One tab equals one renderer process.
3. Renderer Process (Hidden)
But you just said that we only need one renderer process!!
Well I said only one visible process. Now if you were to make a simple GUI focused app, chances are you’ll never need point number 3 but we are here to push the electron app to new limits.
What happens when you want to perform a thread-blocking (heavy computation) task? Where would you perform it? The main process? Nope, your app will crash if you block the main process. The visible renderer process? Nope, your GUI will start to lag and choke. So where do you do it? Yup, you guessed it: this is where the hidden renderer process comes in.
The hidden process is just another tab/window that is inherently hidden from the user. This is because our goal is not to render a UI in it but to perform a thread blocking task. The idea is to create and destroy hidden windows as and when needed in order to do heavy lifting in the background. Thus to do a thread blocking task, you would simply create a hidden window, finish the task in that hidden window ( running in the background ) without causing any issues with the visible components of your app and when you are done, just destroy the hidden window to claim back your resources. It is that simple.
The best part is you can create as many as you want.
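A minimal sketch of this pattern in the main process (the file name and channel names here are made up for illustration):

const { BrowserWindow, ipcMain } = require('electron');

function runHeavyTask(payload) {
  // a window that is never shown to the user
  let worker = new BrowserWindow({ show: false });
  worker.loadFile('worker.html'); // its JS performs the heavy lifting

  worker.webContents.on('did-finish-load', () => {
    worker.webContents.send('start-task', payload);
  });

  ipcMain.once('task-done', (event, result) => {
    console.log('background task finished:', result);
    worker.destroy(); // reclaim resources once the work is done
    worker = null;
  });
}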
This is the article I read to decide upon the approach I would adopt to tackle CPU intensive background tasks. It would be an interesting read if you want to compare this approach to the other options I had.
4. Python Script
By now you already know that Python is one of the key elements of our tech stack, so we have to find a way to make it work effortlessly with the Electron code. The simplest way to do this is by making use of an npm module called python-shell. It allows us to execute external Python scripts from a Node.js application. As you are allowed to use Node.js features in an Electron app, npm modules will work well in your Electron code too.
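In its simplest form, running a script from the Electron side looks roughly like this (the script path is a placeholder, not an actual PSLab Desktop file):

const { PythonShell } = require('python-shell');

// start an external script; 'scripts/measure.py' is a placeholder path
const pyshell = new PythonShell('scripts/measure.py');

// every line the script prints to stdout arrives here
pyshell.on('message', (message) => {
  console.log('python says:', message);
});

// close stdin and wait for the script to finish
pyshell.end((err) => {
  if (err) throw err;
  console.log('script finished');
});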
Q1. From where do you call it?
ans. Obviously, if you are using Python to perform some task, you are planning to do some heavy lifting. But heavy lifting or not, the ideal spot to run the Python script from is a hidden renderer process.
It is important to note that the script is not running inside your Electron app, but rather in parallel with it. This diagram should help you visualize it better.
Script initialization using render process
Now obviously your python script may need some input and will most certainly generate some output when it runs. So as the script runs outside your app, you cannot directly grab or send data to the script. But python-shell has a nice feature which can be used to deal with this situation. We will talk about it in the next section.
The way you would design the script itself is again a topic for discussion which will be covered as we go into the details of coding.
Communications
An important point of discussion is how will each of these four components talk to each other. And to be frank, we just need to take care of three types of communication in our app.
1. Main — Renderer :
The actual means of communication is the Electron IPC (Inter-Process Communication). The renderer processes use ipcRenderer while the main process uses ipcMain.
Now, if you were to ask what it actually looks like, the best analogy is event listeners. You just set up a bunch of listeners on both ends, depending on what you plan to listen for. Later on in the app life cycle, you can trigger these listeners manually to get the desired output.
Placing listeners on Main and render threads for bouncing messages
This idea can be extended to any number of renderer and main process combinations. Under the hood, this is all there is.
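A bare-bones sketch of that listener setup (the channel names are made up):

// main.js
const { ipcMain } = require('electron');

ipcMain.on('do-work', (event, args) => {
  // ... handle the request on behalf of the renderer ...
  event.sender.send('work-done', { ok: true });
});

// renderer.js
const { ipcRenderer } = require('electron');

ipcRenderer.on('work-done', (event, payload) => {
  console.log(payload);
});
ipcRenderer.send('do-work', { task: 'ping' });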
2. Visible Renderer— Hidden Renderer :
There is no direct means of communication between two renderer processes. So the hidden renderer process cannot directly talk to the visible renderer process. The only way to actually establish a communication is by bouncing messages off the main process much like a relay system.
Hidden render – Visible render process communication
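In the main process, the relay boils down to forwarding whatever arrives from one window to the other (a sketch; the window variable and channel name are illustrative):

// main.js — bounce results from the hidden window to the visible one
// visibleWindow is the BrowserWindow created for the UI
ipcMain.on('worker-result', (event, data) => {
  visibleWindow.webContents.send('worker-result', data);
});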
3. Hidden Renderer — Python Script :
The python-shell library provides us a way to make use of stdin and stdout channels for sending and receiving data from the python script.
So the best way to write your Python code is to assume that you would be using it from the terminal, where you directly read data from user input and print data out to the terminal using simple print statements.
The message format that we will be using is JSON (JavaScript Object Notation), although many other options are available. JSON makes it easier to structure data, which in turn makes it really easy to extract data on both ends with very minimal overhead.
Script-Hidden render communication
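On the JavaScript side, a sketch of this exchange could look like the following (the script name and message fields are placeholders); with python-shell's mode set to 'json', whatever you send is serialized to a JSON line on the script's stdin, and every JSON line the script prints arrives back as a parsed object:

const { PythonShell } = require('python-shell');

const pyshell = new PythonShell('scripts/oscilloscope.py', { mode: 'json' });

// every JSON object the script prints on stdout arrives here already parsed
pyshell.on('message', (message) => {
  console.log('from python:', message);
});

// objects sent here reach the script on stdin as JSON lines
pyshell.send({ command: 'START', samples: 1000 });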
Note: Keep in mind that I have mentioned that this is the message format I use; don’t end up relating this to HTTP request-response because the term JSON is used quite heavily in that domain.