In previous versions of the smart speaker firmware, we had one Flask server for serving the configuration page and another Flask server for serving the control page. This led to inconsistencies in the frontend. To make the frontend and the user experience consistent across all platforms, the SUSI.AI Web Client is now integrated into the smart speaker.
Now whenever the device is in Access Point mode (Setup mode), and the user accesses the web client running on the smart speaker, the device configuration page is shown.
If the device is not in Access Point mode, the normal Web Client is shown with a green bar on top saying “You are currently accessing the local version of your SUSI.AI running on your smart device. Configure now”. Clicking on the Configure now link redirects the user to the control page, where the user can change the settings for their smart speaker and control media playback.
To integrate both the control web page and the configure web page into the web client, we needed to combine the Flask servers for the control and configure pages. This was done by adding all the endpoints of the configure server to the sound server and then renaming the sound server to the control server – Merge Servers
Serving the WebClient through the control server
The next step is to add the Web Client static code to the control server and make it available on the root URL of the server. To do this, first, we have to clone the web client during the installation and add the required symbolic links for the server.
echo "Downloading: Susi.AI webclient" if [ ! -d "susi.ai" ] then git clone --depth 1 -b gh-pages https://github.com/fossasia/susi.ai.git ln -s $PWD/susi.ai/static/* $PWD/susi_installer/raspi/soundserver/static/ ln -s $PWD/susi.ai/index.html $PWD/susi_installer/raspi/soundserver/templates/ else echo "WARNING: susi.ai directory already present, not cloning it!" >&2 fi
The next step is to add the route for the Web Client in the control flask server.
Here, index.html is the Web Client entry file which we linked into the templates folder in the previous step.
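A minimal sketch of what that route could look like, assuming a standard Flask setup with render_template (the real control server registers many more endpoints around it):

from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def web_client():
    # index.html is the SUSI.AI Web Client entry point symlinked into templates/
    return render_template('index.html')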
Connecting the web client to the locally running SUSI Server
The Web Client by default connects to the SUSI Server at https://api.susi.ai, but the smart speaker has its own server running. To connect to the local server we do the following:
– Before building the local version of the web client from source, we need to set the process.env.REACT_APP_LOCAL_ENV environment variable to true.
– During the build, the JavaScript function check() checks whether the above-mentioned environment variable is set to true and, if so, sets the API_URL to http://<host_IP_address>:4000
function check() {
  if (process && process.env.REACT_APP_LOCAL_ENV === 'true') {
    return 'http://' + window.location.hostname + ':4000';
  }
  return 'https://api.susi.ai';
}
In this blog post we will talk about the software architecture that we have followed in PSLab Desktop. We will discuss in detail how the three components, i.e. Electron, React and Python, gel together in our architecture and give the user a smooth, high-performance, cross-platform experience.
The picture below is all there is to the PSLab Desktop architecture. I have numbered all the main components and will break them down in a later section of the article. Once you keep this architecture in the back of your head while writing the app, you will be able to think clearly and code more efficiently.
An overview of the whole architecture
I am sure there are many other approaches available to build an electron app, but for the stack I used, this architecture turned out to be the only working model that went all the way from development to production and proved its stability after a lot of experimentation.
The PSLab Desktop happens to be a software interface that needs to interact with, and fetch data from, an actual hardware device at a very fast rate and display the results in real time. The stability of this architecture has been tested under various use cases of the PSLab Desktop.
Let’s break it down
There are 4 major parts in the whole architecture, let us talk about them one by one.
1. Main Process
If you read the documentation of electron, it states that the purpose of the main process is to create and manage browser windows. Each browser window can then run its own Web page/GUI. As this process is kind of central to managing the whole app, there can be just one and only one main process throughout an electron app life cycle.
The main process has a secondary yet important task of communicating with the OS to make use of native GUI features like notifications, creating shortcuts, responding to auto-updates and so on.
It also plays a key role in communication, acting as the middle man for all the browser windows it has created. This is in fact the backbone of the whole architecture we are relying on.
2. Renderer Process (Visible)
Every browser window that the main process creates gives rise to a renderer process. For the sake of simplicity, let us call each browser window to be equivalent to a renderer process. The only purpose of a renderer process is to show a webpage. In other words, its task is to render the GUI.
It is also important to note that each browser window that the main process generates runs independently much like tabs on our browser. Even if a tab crashes, the rest of the tabs keep running. This very fundamental feature gives us limitless powers to create stuff.
Q1. Can I make use of native OS features directly from renderer process?
No, you cannot. This is due to security reasons; to make use of native OS features, every renderer process has to communicate with the main process and ask it to get the job done on its behalf.
Note: Actually, you can access the main process features directly from a renderer process through something called the remote module, but that is not considered a healthy practice as careless code may end up leaking resources.
Everything that you need to know about main and renderer process has been mentioned in the electron documentation.
You may have noticed that I have written the word visible separately for the renderer process. Why is that? Because this is the only renderer that will be visible. As we have mentioned before, we are using React as one of our tech stack elements, which is a great tool for writing Single Page Applications. And as you may have guessed, unlike old web apps, Single Page Applications do not need to open several tabs; rather, they are like a mobile app that gives the user a smooth experience on a single screen/tab. Thus, we only need one tab to actually render the whole UI for the app. One tab equals one renderer process.
3. Renderer Process (Hidden)
But you just said that we only need one renderer process!!
Well I said only one visible process. Now if you were to make a simple GUI focused app, chances are you’ll never need point number 3 but we are here to push the electron app to new limits.
What happens when you want to perform a thread-blocking (heavy computation) task? Where would you like to perform it? The main process? Nope, your app will crash if you block the main process. The visible renderer process? Nope, your GUI will start to lag and choke. So where do you do it? Yup, you guessed it, this is where the hidden renderer process comes in.
The hidden process is just another tab/window that is inherently hidden from the user. This is because our goal is not to render a UI in it but to perform a thread blocking task. The idea is to create and destroy hidden windows as and when needed in order to do heavy lifting in the background. Thus to do a thread blocking task, you would simply create a hidden window, finish the task in that hidden window ( running in the background ) without causing any issues with the visible components of your app and when you are done, just destroy the hidden window to claim back your resources. It is that simple.
The best part is you can create as many as you want.
This is the article I read to decide upon the approach I would adopt to tackle CPU intensive background tasks. It would be an interesting read if you want to compare this approach to the other options I had.
4. Python Script
By now you already know that Python is one of the key elements of our tech stack, so we have to find a way to make it work effortlessly with the Electron code. The simplest way to do this was to make use of an npm module called python-shell, which allows us to execute external Python scripts from a Node.js application. As you are allowed to use Node.js features in an Electron app, npm modules work well in your Electron code too.
Q1. From where do you call it?
Ans. Obviously, if you are using Python for performing some task, you are planning to do some heavy lifting. But heavy lifting or not, the ideal spot to run the Python script from is a hidden renderer process.
It is important to note that the script is not running inside your electron app, rather running in parallel. This diagram should help you visualize it better.
Script initialization using render process
Now obviously your python script may need some input and will most certainly generate some output when it runs. So as the script runs outside your app, you cannot directly grab or send data to the script. But python-shell has a nice feature which can be used to deal with this situation. We will talk about it in the next section.
The way you would design the script itself is again a topic for discussion which will be covered as we go into the details of coding.
Communications
An important point of discussion is how will each of these four components talk to each other. And to be frank, we just need to take care of three types of communication in our app.
1. Main — Renderer :
The actual means of communication is called the electron IPC (Inter Process Communication). The renderer processes use the ipcRenderer while the main process uses the ipcMain.
Now if you were to ask how it actually looks, the best analogy is event listeners. You just set up a bunch of listeners on both ends depending on what you plan to listen for. Later on in the app life cycle, you can trigger these listeners manually to get the desired output.
Placing listeners on Main and render threads for bouncing messages
This idea can be extended to any number of render and main process combination. Under the hood, this is all there is.
2. Visible Renderer— Hidden Renderer :
There is no direct means of communication between two renderer processes. So the hidden renderer process cannot directly talk to the visible renderer process. The only way to actually establish a communication is by bouncing messages off the main process much like a relay system.
Hidden render – Visible render process communication
3. Hidden Renderer — Python Script :
The python-shell library provides us a way to make use of stdin and stdout channels for sending and receiving data from the python script.
So the best way to write your Python code would be to assume that you would be using it from the terminal, where you directly read data from user input and print data out to the terminal using simple print statements.
The message format that we will be using is JSON (JavaScript Object Notation), although many other options are available. JSON makes it easier to structure data, which in turn makes it really easy to extract data on both ends with very minimal overhead.
Script-Hidden render communication
Note: Keep in mind that I have mentioned that this is the message format I use; don’t end up relating this to HTTP request-response because the term JSON is used quite heavily in that domain.
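To make this concrete, here is a minimal sketch of the Python side of such a setup, assuming JSON messages are exchanged line by line over stdin/stdout; the command names here are illustrative, not the actual PSLab protocol:

import json
import sys

def handle(request):
    # Illustrative task dispatch; 'command' and 'value' are made-up field names
    if request.get("command") == "square":
        return {"result": request.get("value", 0) ** 2}
    return {"error": "unknown command"}

for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    message = json.loads(line)            # sent by python-shell over stdin
    reply = handle(message)
    print(json.dumps(reply), flush=True)  # read back by python-shell from stdout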
This blog post will showcase the introduction of a new owner role for users in Open Event Frontend. Now a user associated with organizing an event can have any of the following roles:
Owner
Organizer
Co-organizer
Track-organizer
Moderator
Registrar
Till now, the user creating the event had the organizer role, which was not exclusive: an organizer can invite other users to be organizers. Because of this, we could not later give exclusive rights to the event creator.
But there can only be a single owner of an event. So, the introduction of the new owner role will help us distinguish the owner and give him/her exclusive rights for the event. This refactor involved a lot of changes. Let's go step by step:
I updated the role of the user creating the event to be owner by default. For this, we query the user and the owner role and use them to create a new UserEventRoles object, which is saved to the database. Then we create a role invite object using the user email, role title, event id and role id, as sketched below.
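A rough sketch of this step; the model and helper names follow the description above and are not necessarily the exact ones used in the Open Event server:

# Sketch: assign the owner role to the event creator. Assumes the usual
# Open Event imports such as Role, UserEventRoles, RoleInvite and save_to_db.
def assign_owner(event, user):
    owner_role = Role.query.filter_by(name='owner').first()
    save_to_db(UserEventRoles(user=user, event=event, role=owner_role))
    role_invite = RoleInvite(
        email=user.email,
        role_name=owner_role.title_name,
        event_id=event.id,
        role_id=owner_role.id,
    )
    save_to_db(role_invite)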
We included a new function is_owner in the permission_manager helper, which checks whether current_user is the owner of the event passed in kwargs. If the user is not the owner, a ForbiddenError is returned.
@jwt_required
def is_owner(view, view_args, view_kwargs, *args, **kwargs):
    user = current_user

    if user.is_staff:
        return view(*view_args, **view_kwargs)
    if not user.is_owner(kwargs['event_id']):
        return ForbiddenError({'source': ''}, 'Owner access is required').respond()
    return view(*view_args, **view_kwargs)
Updated event schema to add new owner fields and relationship. We updated the fields –
organizer_name -> owner_name
has_organizer_info -> has_owner_info
organizer_description -> owner_description
We also included owner relationship in the EventSchemaPublic
To accommodate the introduction of the owner role, we have to introduce a new boolean field is_user_owner and a new relationship owner_events to the UserSchema. The relationship owner_events can be used to fetch the list of events of which a given user is the owner, as sketched below.
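A sketch of these UserSchema additions with marshmallow-jsonapi might look like this; the Meta options and field arguments are illustrative, not the exact Open Event definitions:

from marshmallow import fields
from marshmallow_jsonapi import Schema
from marshmallow_jsonapi.fields import Relationship

class UserSchema(Schema):
    class Meta:
        type_ = 'user'

    id = fields.Str(dump_only=True)
    # True when the user owns at least one event (illustrative semantics)
    is_user_owner = fields.Boolean(dump_only=True)
    # Events of which this user is the owner
    owner_events = Relationship(
        attribute='owner_events',
        many=True,
        schema='EventSchemaPublic',
        type_='event',
    )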
Similarly, we need to update the Event model too. A new owner relationship, related to User, is introduced in the event model; it stores the owner of the event. We then introduce a new function get_owner() to the model, which iterates through all the roles and returns the user whose role is owner.
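A sketch of such a helper on the Event model; it assumes a roles relationship containing the user–event–role entries, and the attribute names are illustrative:

def get_owner(self):
    """Return the user who owns this event, or None if no owner is set."""
    for role in self.roles:
        if role.role.name == 'owner':
            return role.user
    return None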
In this blog I'll be explaining the Devices tab in the SUSI.AI Admin Panel. Admins can now view the connected devices of all users, with view, edit and delete actions. Admins can also view the location of a device on the map by clicking on the device location of that user.
Implementation
List Devices
The Devices tab displays the device name, macId, room, email id, date added, last active, last login IP and location of each device. The loadDevices function is called in componentDidMount; it calls the fetchDevices API, which fetches the list of devices from the /aaa/getDeviceList.json endpoint. The list of all devices is stored in the devices array. Each device in the array is an object with the above properties. Clicking on the device location opens a popup displaying the device location on the map.
The view action redirects to the user's /mydevices?email=<email>&macid=<macid> page. This allows the admin to have full control of the My Devices section of that user. The admin can change device details and delete devices, and can also see all the devices of the user from the ALL tab. To edit a device, click on the edit icon in the table, update the details and click on the check icon. To delete a device, click on the delete icon, which then asks for confirmation of the device name and, on confirmation, deletes the device.
Edit Device
The edit action opens up a dialog modal which allows the admin to update the device name and room. Clicking on the edit button calls the modifyUserDevices API, which takes the email id, macId, device name and room name as parameters. This calls the API endpoint /aaa/modifyUserDevices.json.
The delete action opens up a confirm delete dialog modal. To delete a device, enter the device name and click on delete. This calls the confirmDelete function, which calls the removeUserDevice API with the email id and macId as parameters. This API hits the endpoint /aaa/removeUserDevices.json.
To conclude, admins can now view all the connected SUSI.AI devices along with the user details and location. They can also access the user's My Devices tab in the Dashboard and update or delete devices.
The open event attendee is an android app which allows users to discover events happening around the world using the Open Event Platform. It consumes the APIs of the open event server to get a list of available events and can get detailed information about them.
The Shimmer effect was created by Facebook to indicate a loading status; instead of using a ProgressBar or the usual loader, Shimmer can be used for a better design and user interface. Facebook also open-sourced a library called Shimmer for both Android and iOS so that every developer can use it for free.
The ListPrivateSkillService and ListPrivateDraftSkillService endpoints were implemented on the SUSI.AI Server for SUSI.AI admins to view the bots and drafts created by users respectively. This allows admins to monitor the bots and drafts created by users and delete the ones which violate the guidelines. Admins can also see the sites where a bot is being used.
Both the ListPrivateSkillService and ListPrivateDraftSkillService endpoints are of GET type. Both of them have a compulsory access_token parameter, but ListPrivateSkillService has an extra optional search parameter.
access_token (necessary): the access_token of the logged-in user. This means the endpoint cannot be accessed in anonymous mode.
search: It fetches a bot with the searched name.
The minimum user role is set to OPERATOR.
API Development
ListPrivateSkillService
For creating a list, we need to access each property of botDetailsObject, in the following manner:
Key → Group → Language → Bot Name → BotList
The code below iterates over the uuids of all the users having a bot, then over the different groupNames, languageNames, and finally over the botNames. If the search parameter is passed, it searches for the bot_name in the language object. Each botDetails object consists of the bot name, language, group and key, i.e. the uuid of the user, and is then added to the botList array.
JsonTray chatbot = DAO.chatbot;
JSONObject botDetailsObject = chatbot.toJSON();
JSONObject keysObject = new JSONObject();
JSONObject groupObject = new JSONObject();
JSONObject languageObject = new JSONObject();
List<JSONObject> botList = new ArrayList<>();
JSONObject result = new JSONObject();

Iterator<String> Key = botDetailsObject.keys();
List<String> keysList = new ArrayList<>();

while (Key.hasNext()) {
    String key = Key.next();
    keysList.add(key);
}

for (String key_name : keysList) {
    keysObject = botDetailsObject.getJSONObject(key_name);
    Iterator<String> groupNames = keysObject.keys();
    List<String> groupnameKeysList = new ArrayList<>();

    while (groupNames.hasNext()) {
        String key = groupNames.next();
        groupnameKeysList.add(key);
    }

    for (String group_name : groupnameKeysList) {
        groupObject = keysObject.getJSONObject(group_name);
        Iterator<String> languageNames = groupObject.keys();
        List<String> languagenamesKeysList = new ArrayList<>();

        while (languageNames.hasNext()) {
            String key = languageNames.next();
            languagenamesKeysList.add(key);
        }

        for (String language_name : languagenamesKeysList) {
            languageObject = groupObject.getJSONObject(language_name);
If the search parameter is passed, the code then searches for a bot with the given name inside languageObject and adds it to botList if it exists. All bots whose bot name matches the searched name are returned.
The list of all bots, botList, is returned as the server response.
ListPrivateDraftSkillService
For creating the list, we need to iterate over each user and check whether the user has a draft bot. We get all the authorized clients from DAO.getAuthorizedClients(). We then iterate over each client and get their identity and authorization. We get the drafts of the client from DAO.readDrafts(userAuthorization.getIdentity()). We then iterate over each draft and add it to the drafts object. Each draft object consists of the date created, the date modified, an object which contains the draft bot information (such as name, language, etc.) provided by the user while saving the draft, and the email id and uuid of the user.
JSONObject result = new JSONObject();
List draftBotList = new ArrayList();
Collection authorized = DAO.getAuthorizedClients();
for (Client client : authorized) {
String email = client.toString().substring(6);
JSONObject json = client.toJSON();
ClientIdentity identity = new ClientIdentity(ClientIdentity.Type.email, client.getName());
Authorization userAuthorization = DAO.getAuthorization(identity);
Map map = DAO.readDrafts(userAuthorization.getIdentity());
JSONObject drafts = new JSONObject();
for (Map.Entry entry: map.entrySet()) {
JSONObject val = new JSONObject();
val.put("object", entry.getValue().getObject());
val.put("created", DateParser.iso8601Format.format(entry.getValue().getCreated()));
val.put("modified", DateParser.iso8601Format.format(entry.getValue().getModified()));
drafts.put(entry.getKey(), val);
}
Iterator keys = drafts.keySet().iterator();
while(keys.hasNext()) {
String key = (String)keys.next();
if (drafts.get(key) instanceof JSONObject) {
JSONObject draft = new JSONObject(drafts.get(key).toString());
draft.put("id", key);
draft.put("email", email);
draftBotList.add(draft);
}
}
}
result.put("draftBots", draftBotList);
The list of all drafts, draftBotList, is returned as the server response.
In conclusion, the admins can now see the bots and drafts created by the user and monitor where they are being used.
When the SUSI Smart Speaker is set up for the first time, it needs to be configured. After successful configuration, the smart speaker is registered with the associated account so that the user can see their smart speaker device information in the settings of their susi.ai account. There are two ways to configure the smart speaker: via the SUSI.AI Android app, or manually through the web interface served in Access Point mode.
After the configuration setup is done, the Smart Speaker reboots and connects to your WiFi and registers the device with the given account using the login information provided during the setup.
Figure: Device Details are shown in the susi.ai account settings after successful configuration.
Working
The Auth Endpoint
Whenever the speaker is configured, whether via the Android app or manually via the web interface, it uses various endpoints of the access-point-server. For storing the login information, the /auth endpoint is used. The /auth endpoint writes the login details to the config.json file at /home/pi/SUSI.AI/config.json.
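A simplified sketch of what such an endpoint could look like is given below; the query parameter names and the response body are illustrative, not the exact access-point-server code:

import json

from flask import Flask, jsonify, request

app = Flask(__name__)
CONFIG_PATH = '/home/pi/SUSI.AI/config.json'

@app.route('/auth', methods=['GET'])
def auth():
    # 'auth', 'email' and 'password' are assumed parameter names
    usage_mode = request.args.get('auth', 'authenticated')
    email = request.args.get('email', '')
    password = request.args.get('password', '')

    # Read the existing configuration and store the login details in it
    with open(CONFIG_PATH) as f:
        config = json.load(f)

    config['usage_mode'] = usage_mode
    config['login_credentials'] = {'email': email, 'password': password}

    with open(CONFIG_PATH, 'w') as f:
        json.dump(config, f, indent=4)

    return jsonify({'message': 'login details stored'})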
The ss-susi-register service is then enabled, i.e. the service will run on the next startup and register the device online once it is connected to the WiFi.
This is the service which registers the device on bootup after the configuration phase. The service waits for the network services to come up so that the registration script runs only after the device is connected to a network. This service uses register.py to register the device online.
[Unit]
Description=Register the smart speaker online
Wants=network-online.target
After=network-online.target
If the registration fails, the smart speaker is put back into access point (configuration) mode and the account information in config.json is reset:
try:
    access_token = get_token(user, password)
    out = device_register(access_token, room)
    logger.debug(str(out))
    break
except:
    if i != 2:
        time.sleep(5)
        logger.warning("Failed to register the device, retrying.")
    else:
        logger.warning("Resetting the device to hotspot mode")
        config['usage_mode'] = "anonymous"
        config['login_credentials']['email'] = ""
        config['login_credentials']['password'] = ""
        subprocess.Popen(['sudo', 'bash', 'susi_installer/raspi/access_point/wap.sh'])
Disable the systemd service
The script should run only once, i.e. only after the configuration process, so the ss-susi-register.service needs to be disabled.
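This could be done at the end of register.py with something like the following; this is a sketch, and the actual script may disable the service differently:

import subprocess

# Stop the one-shot registration service from running on subsequent boots
subprocess.call(['sudo', 'systemctl', 'disable', 'ss-susi-register.service'])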
This blog post will showcase the introduction of the new Initializing status for orders in Open Event Frontend. So, now we have a total of six statuses. Let's take a closer look and understand what exactly these order statuses mean:
Status | Description | Color Code
Initializing | When a user selects tickets and clicks on the Order Now button on the public event page, the user gets 15 minutes to fill up the order form. The status of the order till the form is submitted is initializing. | Yellow
Placed | If only offline paid tickets are present in the order, i.e. paymentMode is one of bank, cheque or onsite, the status of the order is placed. | Blue
Pending | If the order contains online paid tickets, the status of the order is pending. The user gets 30 minutes to complete the payment. If the user completes the payment within these 30 minutes, the status is updated to completed; otherwise it is updated to expired. | Orange
Completed | There are two cases when the status of an order is completed: 1. the ordered tickets are free tickets; 2. the online payment for a pending order is completed within the 30-minute window. | Green
Expired | There are two cases when the status is updated to expired: 1. the user fails to fill up the order form within the 15 minutes allotted (initializing to expired); 2. the user fails to complete the payment for an online paid order within the 30 minutes allotted (pending to expired). | Red
Cancelled | When an organizer cancels an order, the order is given the status cancelled. | Grey
Figures: Placed Order, Completed Order, Pending Order and Expired Order views.
So, basically, the status of the order is set based on the value of the paymentMode attribute.
If the paymentMode is free, the status is set to completed. If the paymentMode is bank or cheque or onsite, the status is set to placed. Otherwise, the status is set to pending.
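On the server side this roughly corresponds to a mapping like the following; this is a simplified sketch, not the exact Open Event server code:

def initial_order_status(payment_mode):
    """Map the order's paymentMode to its initial status (sketch)."""
    if payment_mode == 'free':
        return 'completed'
    if payment_mode in ('bank', 'cheque', 'onsite'):
        return 'placed'
    return 'pending'  # online payment modes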
We render the status of an order in many places in the frontend, so we introduced a new helper, order-color, which returns the color code depending on the status of the order.
import { helper } from '@ember/component/helper';
export function orderColor(params) {
switch (params[0]) {
case 'completed':
return 'green';
case 'placed':
return 'blue';
case 'initializing':
return 'yellow';
case 'pending':
return 'orange';
case 'expired':
return 'red';
default:
return 'grey';
}
}
export default helper(orderColor);
This refactor was followed up on the server as well to accommodate the changes:
Ensuring that the default status is always initializing. For this, we place a condition in before_post hook to mark the status as initializing.
Till now, the email and notification were sent out only for completed orders, but as we now use the placed status for offline paid orders, we send out the email and notification for placed orders too. For this, I updated the condition in the after_create_object hook:
class OrdersListPost(ResourceList):
    ...
    def before_post(self, args, kwargs, data=None):
        ...
        if not has_access('is_coorganizer', event_id=data['event']):
            data['status'] = 'initializing'

    def after_create_object(self, order, data, view_kwargs):
        ...
        # send e-mail and notifications if the order status is completed or placed
        if order.status == 'completed' or order.status == 'placed':
            # fetch tickets attachment
            order_identifier = order.identifier
            ...
To ensure that only orders with status initializing or pending are updatable, we introduced a check in the before_update_object hook:
class OrderDetail(ResourceDetail):
    ...
    def before_update_object(self, order, data, view_kwargs):
        ...
        elif current_user.id == order.user_id:
            if order.status != 'initializing' and order.status != 'pending':
                raise ForbiddenException({'pointer': ''}, "You cannot update a non-initialized or non-pending order")
To allow a new status initializing for the orders, we needed to include it as a valid choice for status in order schema.
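A sketch of what that status field might look like with a marshmallow OneOf validator; the exact field definition in the Open Event schema may differ:

from marshmallow import fields, validate

# status field of the order schema (sketch)
status = fields.Str(
    validate=validate.OneOf(
        ['initializing', 'pending', 'placed', 'completed', 'expired', 'cancelled']
    ),
    allow_none=True,
)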
Open Event Organizer Android App consists of various features which can be used by event organizers to manage their events. They can also invite other people for various roles. After accepting a role invite, the invited user has access to features like the event settings and functionalities like scanning of tickets and editing of event details, depending on the access level of the role.
There can be various roles which can be assigned to a user: Organizer, Co-Organizer, Track Organizer, Moderator, Attendee, Registrar.
Here we will go through the process of implementing the feature to invite a person for a particular role for an event using that person’s email address.
The ‘Add Role’ screen has an email field to enter the invitee’s email address and select the desired role for the person. Upon clicking the ‘Send Invite’ button, the person would be sent a mail containing a link to accept the role invite.
The Role class is used for the different types of available roles.
@Data
@Builder
@Type("role")
@AllArgsConstructor
@NoArgsConstructor
@JsonNaming(PropertyNamingStrategy.KebabCaseStrategy.class)
public class Role {

    @Id(LongIdHandler.class)
    public Long id;

    public String name;
    public String titleName;
}
Pagination (endless scrolling or infinite scrolling) breaks down a list of content into smaller parts that are loaded one at a time. It is important when the quantity of data to be loaded is huge, as loading all the data at once can result in a timeout.
The Open Event Organizer App is an Android app used by event organizers to create and manage events on the Eventyay platform. Features include event creation, ticket management, an attendee list with ticket details, scanning of participants, etc.
In the Open Event Organizer App, the loading of attendees would result in timeout when the number of attendees would be large. The solution for fixing this was the implementation of pagination in the Attendees fragment.
First, the API call needs to be modified to include the page size as well as the addition of page number as a Query.
@GET("events/{id}/attendees?include=order,ticket,event&fields[event]=id&fields[ticket]=id&page[size]=20")
Observable<List<Attendee>> getAttendeesPageWise(@Path("id") long id, @Query("page[number]") long pageNumber);
Now, we need to modify the logic of fetching the list of attendees to include the page number. Whenever one page ends, the next page should be fetched automatically and added to the list.
The page number needs to be passed as an argument in the loadAttendeesPageWise() method in AttendeesViewModel.
Now, in the AttendeesFragment, a check is needed to increase the current page number and load attendees for the next page when the user reaches the end of the list.
if (!recyclerView.canScrollVertically(1)) {
    if (recyclerView.getAdapter().getItemCount() > currentPage * ITEMS_PER_PAGE) {
        currentPage++;
    } else {
        currentPage++;
        attendeesViewModel.loadAttendeesPageWise(currentPage, true);
    }
}
When a new page is fetched, we need to update the existing list and add the elements from the new page.