Implementation of scanning in F-Droid build variant of Open Event Organizer Android App

Open Event Organizer App (Eventyay Organizer App) is the Android app used by event organizers to create and manage events on the Eventyay platform.

Various features include:

  1. Event creation.
  2. Ticket management.
  3. Attendee list with ticket details.
  4. Scanning of attendees.

The Play Store build variant of the app uses the Google Vision API for scanning attendees. This cannot be used in the F-Droid build variant, since F-Droid requires all libraries used in a project to be open source. Thus, we’ll be using this library: https://github.com/blikoon/QRCodeScanner

We’ll start by creating separate ScanQRActivity, ScanQRView and activity_scan_qr.xml files for the F-Droid variant. We’ll be using a common ViewModel for the F-Droid and Play Store build variants.

Let’s start with requesting the user for camera permission so that the mobile camera can be used for scanning QR codes.
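
Note that the runtime request only works if the camera permission is also declared in the app’s AndroidManifest.xml:

<uses-permission android:name="android.permission.CAMERA" />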

public void onCameraLoaded() {
    if (hasCameraPermission()) {
        startScan();
    } else {
        requestCameraPermission();
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
    if (requestCode != PERM_REQ_CODE)
        return;

    // If the request is cancelled, the result arrays are empty.
    cameraPermissionGranted(
        grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED);
}

@Override
public boolean hasCameraPermission() {
    return ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED;
}

@Override
public void requestCameraPermission() {
    ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.CAMERA}, PERM_REQ_CODE);
}


@Override
public void showPermissionError(String error) {
    Toast.makeText(this, error, Toast.LENGTH_SHORT).show();
}

public void cameraPermissionGranted(boolean granted) {
    if (granted) {
        startScan();
    } else {
        showProgress(false);
        showPermissionError("User denied permission");
    }
}

Once the camera permission is granted, or if it was already granted, the startScan() method is called.

@Override
public void startScan() {
    Intent i = new Intent(ScanQRActivity.this, QrCodeActivity.class);
    startActivityForResult(i, REQUEST_CODE_QR_SCAN);
}

QrCodeActivity belongs to the library that we are using.
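
The library is distributed through JitPack; the dependency declaration would look roughly like this (a sketch: the exact coordinates and version should be checked against the library’s README):

// root build.gradle
allprojects {
    repositories {
        maven { url 'https://jitpack.io' }
    }
}

// app module build.gradle
dependencies {
    implementation 'com.github.blikoon:QRCodeScanner:0.1.2'
}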

Once a barcode is scanned, its processing begins: the result is handed to the processBarcode() method in ScanQRViewModel.

@Override
public void onActivityResult(int requestCode, int resultCode, Intent intent) {
    if (requestCode == REQUEST_CODE_QR_SCAN) {
        if (intent == null)
            return;

        // Note: "relult" is not our typo; it is the exact extra key defined by the library
        scanQRViewModel.processBarcode(
            intent.getStringExtra("com.blikoon.qrcodescanner.got_qr_scan_relult"));
    } else {
        super.onActivityResult(requestCode, resultCode, intent);
    }
}

Let’s move on to the processBarcode() method, which is the same as in the Play Store variant.

public void processBarcode(String barcode) {
    Observable.fromIterable(attendees)
        .filter(attendee -> attendee.getOrder() != null)
        .filter(attendee -> (attendee.getOrder().getIdentifier() + "-" + attendee.getId()).equals(barcode))
        .compose(schedule())
        .toList()
        .subscribe(matched -> {
            if (matched.isEmpty()) {
                message.setValue(R.string.invalid_ticket);
                tint.setValue(false);
            } else {
                checkAttendee(matched.get(0));
            }
        });
}

The checkAttendee() method:

private void checkAttendee(Attendee attendee) {
    onScannedAttendeeLiveData.setValue(attendee);

    if (toValidate) {
        message.setValue(R.string.ticket_is_valid);
        tint.setValue(true);
        return;
    }

    boolean needsToggle = !(toCheckIn && attendee.isCheckedIn ||
        toCheckOut && !attendee.isCheckedIn);

    attendee.setChecking(true);
    showBarcodePanelLiveData.setValue(true);

    if (toCheckIn) {
        message.setValue(
            attendee.isCheckedIn ? R.string.already_checked_in : R.string.now_checked_in);
        tint.setValue(true);
        attendee.isCheckedIn = true;
    } else if (toCheckOut) {
        message.setValue(
            attendee.isCheckedIn ? R.string.now_checked_out : R.string.already_checked_out);
        tint.setValue(true);
        attendee.isCheckedIn = false;
    }

    if (needsToggle)
        compositeDisposable.add(
            attendeeRepository.scheduleToggle(attendee)
                .subscribe(() -> {
                    // Nothing to do
                }, Logger::logError));
}

This would toggle the check-in state of the attendee.

Resources:

Library used: QRCodeScanner

Pull Request: feat: Implement scanning in F-Droid build variant

Open Event Organizer App: Project repo, Play Store, F-Droid


Handling Planned Actions for the SUSI Smart Speaker

Planned actions are the latest feature added to the SUSI Smart Speaker. The user now has the option to set timed actions such as alarms, countdown timers etc. So the user can now say “SUSI, set an alarm in one minute” and after one minute the smart speaker will notify them.

The following flowchart represents the workflow for the working of a planned action:

Planned Action Response

The SUSI Server accepts the planned action query and sends a multi-action response which looks like this: 

"actions": [
  {
    "language": "en",
    "type": "answer",
    "expression": "alarm set for in 1 minute"
  },
  {
    "expression": "ALARM",
    "language": "en",
    "type": "answer",
    "plan_delay": 60003,
    "plan_date": "2019-08-19T22:28:44.283Z"
  }
]


Here we can see that we have two actions in the server response. The first action is of the type “answer” and is executed by the SUSI Linux client immediately; the second has the `plan_date` and `plan_delay` keys, which tell the BUSY state of the SUSI Linux client that this is a planned action, so it is sent to the scheduler.

Parsing Planned Action Response From The Server 

The SUSI Python wrapper is responsible for parsing the response from the server and making it understandable to the SUSI Linux client. In SUSI Python we have classes which represent the different types of actions possible. SUSI Python takes all the actions sent by the server and parses them into objects of the different action types. To enable handling planned actions we add two more attributes to the base action class – `plan_delay` and `plan_date`.


class BaseAction:
    def __init__(self, plan_delay=None, plan_date=None):
        self.plan_delay = plan_delay
        self.plan_date = plan_date


class AnswerAction(BaseAction):
    def __init__(self, expression, plan_delay=None, plan_date=None):
        super().__init__(plan_delay, plan_date)
        self.expression = expression


Here we can see that all the action types which can be planned actions call the base class’ constructor to set the values of the plan_delay and plan_date attributes.

The next step is to parse the different action type objects and generate the final result:

def generate_result(response):
    result = dict()
    actions = response.answer.actions
    result["planned_actions"] = []
    for action in actions:
        data = dict()
        if isinstance(action, AnswerAction):
            if action.plan_delay is not None and action.plan_date is not None:
                data['answer'] = action.expression
                data['plan_delay'] = action.plan_delay
                data['plan_date'] = action.plan_date
            else:
                result['answer'] = action.expression
        if data != {}:
            result["planned_actions"].append(data)
    return result

Here, if the action object has a non-None value for the planned attributes, the action object’s values are added to the planned actions list.

Listening to Planned Actions in the SUSI Linux Client

In the busy state, we check whether the payload coming from the IDLE state is a query or a planned response coming from the scheduler. If the payload is a query, it is sent to the server; otherwise the payload is executed directly:

if isinstance(payload, str):
    logger.debug("Sending payload to susi server: %s", payload)
    reply = self.components.susi.ask(payload)
else:
    logger.debug("Executing planned action response: %s", payload)
    reply = payload


If the payload was a query and the server replies with a planned action response, the server response is sent to the scheduler:

if 'planned_actions' in reply.keys():
    for plan in reply['planned_actions']:
        self.components.action_schduler.add_event(int(plan['plan_delay']) / 1000, plan)


The scheduler then schedules the event and sends the payload to the IDLE state after the required delay. To trigger planned actions, we implemented an event-based observer using RxPy. The listener resides in the idle state of the SUSI State Machine:

        if self.components.action_schduler is not None:
            self.components.action_schduler.subject.subscribe(
                on_next=lambda x: self.transition_busy(x))
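
For reference, here is a minimal sketch of what such a scheduler might look like, assuming the add_event and subject names used above (the actual implementation may differ):

import threading

from rx.subject import Subject  # RxPY 3; older RxPY exposed this as rx.subjects


class ActionScheduler:
    def __init__(self):
        self.subject = Subject()

    def add_event(self, delay_seconds, payload):
        # Emit the planned action payload to subscribers once the delay expires
        timer = threading.Timer(delay_seconds,
                                lambda: self.subject.on_next(payload))
        timer.daemon = True
        timer.start()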


The observer in the IDLE state, on receiving an event, sends the payload to the busy state where it is processed. This is done by the transition_busy method, which looks up the target state in the allowedStateTransitions mapping.

def transition_busy(self, reply):
    self.transition(self.allowedStateTransitions.get('busy'), payload=reply)

Resources

Understanding the SUSI State Machine – https://blog.fossasia.org/implementing-susi-linux-app-as-a-finite-state-machine/

Audio Structure of SUSI Smart Speaker – https://blog.fossasia.org/audio-structure-of-susi-smart-speaker/

Reactive Python Documentation – https://rxpy.readthedocs.io/en/latest/

Tags

SUSI Smart Speaker, SUSI.AI, FOSSASIA, GSoC19


APIs in SUSI.AI BotBuilder

In this blog, I’ll explain the different APIs involved in the SUSI.AI Bot Builder and how they work. If you are wondering how the SUSI.AI bot builder works or how drafts are saved, then this is the perfect blog. I’ll be explaining the different APIs grouped by their endpoints.

API Implementation

fetchChatBots

export function fetchChatBots() {
 const url = `${API_URL}/${CMS_API_PREFIX}/getSkillList.json`;
 return ajax.get(url, {
   private: 1,
 });
}

This API is used to fetch all your saved chatbots, which are displayed on the BotBuilder page. The API endpoint is getSkillList.json. The same endpoint is used when fetching a user’s skills; the difference is that the query parameter private is passed, which then returns your chatbots. Now if you are wondering why we have the same endpoint for skills and chatbots, the simple plain reason for this is that chatbots are your private skills.
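
A hedged usage sketch (the component state field and payload handling are illustrative, not the exact client code):

fetchChatBots()
  .then(payload => {
    // payload is assumed to hold the list of private skills (chatbots)
    this.setState({ chatbots: payload });
  })
  .catch(error => console.log(error));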

fetchBotDetails

export function fetchBotDetails(payload) {
 const { model, group, language, skill } = payload;
 const url = `${API_URL}/${CMS_API_PREFIX}/getSkill.json`;
 return ajax.get(url, {
   model,
   group,
   language,
   skill,
   private: 1,
 });
}

This API is used to fetch the details of a bot/skill from the API endpoint getSkill.json. The group name, language, skill name, private flag and model are passed as query parameters.

fetchBotImages

export function fetchBotImages(payload) {
 const { name: skill, language, group } = payload;
 const url = `${API_URL}/${CMS_API_PREFIX}/getSkill.json`;
 return ajax.get(url, {
   group,
   language,
   skill,
   private: 1,
 });
}

This API is used to fetch skill and bot images from the API endpoint getSkill.json. Group name, language, skill name and private are passed as query parameters.

uploadBotImage

export function uploadBotImage(payload) {
 const url = `${API_URL}/${CMS_API_PREFIX}/uploadImage.json`;
 return ajax.post(url, payload, {
   headers: { 'Content-Type': 'multipart/form-data' },
   isTokenRequired: false,
 });
}

This API is used to upload the bot image to the API endpoint uploadImage.json. The Content-Type entity header indicates the media type of the resource; multipart/form-data means no characters will be encoded, and it is used when a form requires binary data, like the contents of a file or image, to be uploaded.

deleteChatBot

export function deleteChatBot(payload) {
 const { group, language, skill } = payload;
 const url = `${API_URL}/${CMS_API_PREFIX}/deleteSkill.json`;
 return ajax.get(url, {
   private: 1,
   group,
   language,
   skill,
 });
}

This API is used to delete a skill or bot via the API endpoint deleteSkill.json.

storeDraft

export function storeDraft(payload) {
 const { object } = payload;
 const url = `${API_URL}/${CMS_API_PREFIX}/storeDraft.json`;
 return ajax.get(url, { object });
}

This API is used to store a draft bot via the API endpoint storeDraft.json. The object passed as a parameter has the properties given by the user while saving the draft, such as the skill name, group etc.

readDraft

export function readDraft(payload) {
 const url = `${API_URL}/${CMS_API_PREFIX}/readDraft.json`;
 return ajax.get(url, { ...payload });
}

This API is used to fetch draft from the API endpoint readDraft.json. This API is called on the BotBuilder Page where all the saved drafts are shown.

deleteDraft

export function deleteDraft(payload) {
 const { id } = payload;
 const url = `${API_URL}/${CMS_API_PREFIX}/deleteDraft.json`;
 return ajax.get(url, { id });
}

This API is used to delete the saved Draft from the API endpoint deleteDraft.json. It only needs one query parameter i.e. the draft ID.

In conclusion, the above APIs are the backbone of the SUSI.AI Bot Builder. The API endpoints on the server ensure the user has the same experience across all clients. Do check out the implementation of the different API endpoints in the server here.


Implement JWT Refresh token in Open Event Attendee App

In the Open Event Attendee Android app, earlier only the access token was used. This caused frequent HTTP 401 errors due to token expiry and forced the user to sign in again. Now we have implemented refresh-token authorization with the Open Event Server using Retrofit and OkHttp. Retrofit is one of the most popular HTTP clients for Android. When calling an API, we may require authentication using a token. Usually, the token expires after a certain amount of time and needs to be refreshed using the refresh token; the client then needs to send an additional HTTP request in order to get the new token. Imagine you have a collection of many different APIs, each of them requiring token authentication. If you had to handle the refresh token by modifying your code one by one, it would take a lot of time and, of course, it’s not a good solution. In this blog, I’m going to show you how to handle the refresh token on each API call automatically if the token expires.

  • How refresh token works?
  • Add authenticator to OkHttp
  • Network call and handle response
  • Conclusion
  • Resources 

Let’s analyze every step in detail.

How Refresh Token Works?

Whether tokens are opaque or not is usually defined by the implementation. Common implementations allow for direct authorization checks against an access token. That is, when an access token is passed to a server managing a resource, the server can read the information contained in the token and decide itself whether the user is authorized or not (no checks against an authorization server are needed). This is one of the reasons tokens must be signed (using JWS, for instance). On the other hand, refresh tokens usually require a check against the authorization server. 

Add Authenticator to OkHTTP

OkHttp will automatically ask the Authenticator for credentials when a response is 401 Unauthorized, retrying the last failed request with them.

class TokenAuthenticator : Authenticator {

    override fun authenticate(route: Route?, response: Response): Request? {

        // Refresh the access token using a synchronous API request.
        // blockingGet() is acceptable here since authenticate() runs on a background thread.
        val newAccessToken = service.refreshToken().blockingGet().refreshToken

        // Add the new header to the rejected request and retry it
        return response.request().newBuilder()
            .header(AUTHORIZATION, newAccessToken)
            .build()
    }
}

Add the authenticator to OkHttp:

val builder = OkHttpClient().newBuilder()
            .authenticator(TokenAuthenticator())
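
The client built above is then handed to Retrofit. A hedged sketch of the wiring (the base URL, converter factories and service interface name are illustrative):

val retrofit = Retrofit.Builder()
    .baseUrl("https://api.eventyay.com/v1/")
    .client(builder.build())
    .addConverterFactory(JacksonConverterFactory.create())
    .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
    .build()

// "EventService" is an illustrative name for the interface declaring refreshToken()
val service = retrofit.create(EventService::class.java)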

Network Call and Handle Response

API call with retrofit:

@POST("/auth/token/refresh")
fun refreshToken(): Single<RefreshResponse>

Refresh Response:

@JsonNaming(PropertyNamingStrategy.SnakeCaseStrategy::class)
data class RefreshResponse(
    val refreshToken: String
)

In a Nutshell

Refresh tokens improve security and allow for reduced latency and better access patterns to authorization servers. Implementations can be simple using tools such as JWT + JWS. If you are interested in learning more about tokens (and cookies), check our article here.

Resources

  1. Android – Retrofit 2 Refresh Access Token with OkHttpClient and Authenticator: https://www.woolha.com/tutorials/android-retrofit-2-refresh-access-token-with-okhttpclient-and-authenticator
  2. Refresh Tokens: When to Use Them and How They Interact with JWTs: https://auth0.com/blog/refresh-tokens-what-are-they-and-when-to-use-them/

Tags

Eventyay, open-event, FOSSASIA, GSoC, Android, Kotlin, Refresh tokens


Implementation of Paid Route for Event Invoice in Open Event Frontend

The implementation of event invoice tables is already explained in the blog post Implementation of Event Invoice view using Ember Tables. This blog post is an extension which showcases the implementation of the paid route for event invoices in Open Event Frontend. An event invoice is the monthly fee paid by the organizer to the platform for hosting their event.

We begin by defining a route for displaying the paid event invoice. We chose the event-invoice/paid route in router.js:

this.route('event-invoice', function() {
   ...
   this.route('paid', { path: '/:invoice_identifier/paid' });
 });

Now, we need to finalize the paid route. The route is divided into three sections – 

  • titleToken :  To display title in the tab of the browser.
  • model :  To fetch event-invoice with the given identifier.
  • afterModel : To check if the status of the fetched event invoice is paid or due. If it is due, the user is redirected to the review page; otherwise they are redirected to the paid page.
import Route from '@ember/routing/route';
 
export default class extends Route {
 
 titleToken(model) {
   return this.l10n.tVar(`Paid Event Invoice - ${model.get('identifier')}`);
 }
 
 model(params) {
   return this.store.findRecord('event-invoice', params.invoice_identifier, 
     {
         include : 'event,user',
         reload  : true
     }
   );
 }
 
 afterModel(model) {
   if (model.get('status') === 'due') {
     this.transitionTo('event-invoice.review', model.get('identifier'));
   } else if (model.get('status') === 'paid') {
     this.transitionTo('event-invoice.paid', model.get('identifier'));
   }
 }
}

Now, we need to design the template for paid route. It includes various elements –

  1. Invoice Summary
<div class="ten wide column print">      
    {{event-invoice/invoice-summary data=model                             
        event=model.event                               
        eventCurrency=model.event.paymentCurrency}} 
</div>

The invoice summary contains some of the details of the event invoice. It contains the event name, date of issue, date of completion and amount payable of the event invoice. The snippet for invoice summary can be viewed here

  2. Event Info:

<div class="mobile hidden six wide column">
  {{event-invoice/event-info event=model.event}}
</div>

The event info section of the paid page contains the description of the event to which the invoice is associated. It contains event location, start date and end date of the event. The snippet for event info can be viewed here

  3. Billing Info:

<div class="ten wide column">
  {{event-invoice/billing-info user=model.user}}
</div>

The billing info section of the event invoice paid page contains the billing info of the user to which the event is associated and that of the admin. The billing info includes name, email, phone, zip code and country of the user and the admin. The snippet for billing info can be viewed here

  4. Payee Info:

<div class="mobile hidden row">
   {{event-invoice/payee-info data=model payer=model.user}}
</div>

The payee information displays the name and email of the user who pays for the invoice and also the method of the payment along with relevant information. The snippet for payee info can be viewed here

We can download the invoice of the payment made for the event invoice. This is triggered when the Print Invoice button is clicked. 

Code snippet to trigger the download of the invoice –

 <div class="row">
   <div class="column right aligned">
     <button {{action 'downloadEventInvoice' model.event.name model.identifier }} class="ui labeled icon blue {{if isLoadingInvoice 'loading'}} button">
        <i class="print alternate icon"></i>
        {{t 'Print Invoice'}}
     </button>
   </div>
 </div>

The downloadEventInvoice function is called when the Print Invoice button is clicked. The event name and order ID are passed as parameters to the function. When the invoice PDF generation is successful, the message Here is your Event Invoice is displayed on the screen, whereas if there is an error, the message Unexpected error occurred is displayed.

@action
async downloadEventInvoice(eventName, orderId) {
  this.set('isLoadingInvoice', true);
  try {
    const result = await this.loader.downloadFile(`/events/invoices/${orderId}`);
    const anchor = document.createElement('a');
    anchor.style.display = 'none';
    anchor.href = URL.createObjectURL(new Blob([result], { type: 'application/pdf' }));
    anchor.download = `${eventName} - EventInvoice-${orderId}.pdf`;
    document.body.appendChild(anchor);
    anchor.click();
    this.notify.success(this.l10n.t('Here is your Event Invoice'));
    document.body.removeChild(anchor);
  } catch (e) {
    console.warn(e);
    this.notify.error(this.l10n.t('Unexpected error occurred.'));
  }
  this.set('isLoadingInvoice', false);
}


Implementing a Chat Bubble in SUSI.AI

SUSI.AI now has a chat bubble on the bottom right of every page to assist you. The chat bubble allows you to connect with SUSI.AI in just a click. You can now directly play test examples of skills on the chatbot. It can also be viewed in full-screen mode.

Redux Code

The Redux action associated with the chat bubble is handleChatBubble. The Redux state chatBubble can have 3 states:

  • minimised – chat bubble is not visible. This state is set when the chat is viewed in full screen mode.
  • bubble – only the chat bubble is visible. This state is set when close icon is clicked and on toggle.
  • full – the chat bubble along with message section is visible. This state is set when minimize icon is clicked on full screen mode and on toggle.
const defaultState = {
  chatBubble: 'bubble',
};

export default handleActions(
  {
    [actionTypes.HANDLE_CHAT_BUBBLE](state, { payload }) {
      return {
        ...state,
        chatBubble: payload.chatBubble,
      };
    },
  },
  defaultState,
);

Speech box for skill example

The user can click on the speech box for a skill example and immediately view the answer for that skill on the chatbot. When a speech bubble is clicked, a query parameter testExample is added to the URL. The value of this query parameter is resolved and answered by the Message Composer. To be able to generate an answer bubble again and again for the same query, we have a reducer state testSkillExampleKey which is updated when the user clicks on the speech box; it is passed as the key parameter to messageSection.
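
A minimal sketch of such a reducer case (the action type name is an assumption; the real store may differ):

[actionTypes.UPDATE_TEST_SKILL_EXAMPLE_KEY](state) {
  // Incrementing the key remounts messageSection, so the same query
  // can produce a fresh answer bubble
  return {
    ...state,
    testSkillExampleKey: state.testSkillExampleKey + 1,
  };
},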

Chat Bubble Code

The functions involved in the working of chatBubble code are:

  • openFullScreen – This function is called when the full screen icon is clicked in the tab and laptop view and also when chat bubble is clicked in mobile view. This opens up a full screen dialog with message section. It dispatches handleChatBubble action which sets the chatBubble reducer state as minimised.
  • closeFullScreen – This function is called when the exit full screen icon is clicked. It dispatches a handleChatBubble action which sets the chatBubble reducer state as full.
  • toggleChat –  This function is called when the user clicks on the chat bubble. It dispatches handleChatBubble action which toggles the chatBubble reducer state between full and bubble.
  • handleClose – This function is called when the user clicks on the close icon. It dispatches handleChatBubble action which sets the chatBubble reducer state to bubble.
openFullScreen = () => {
   const { actions } = this.props;
   actions.handleChatBubble({
     chatBubble: 'minimised',
   });
   actions.openModal({
     modalType: 'chatBubble',
     fullScreenChat: true,
   });
 };

 closeFullScreen = () => {
   const { actions } = this.props;
   actions.handleChatBubble({
     chatBubble: 'full',
   });
   actions.closeModal();
 };

 toggleChat = () => {
   const { actions, chatBubble } = this.props;
   actions.handleChatBubble({
     chatBubble: chatBubble === 'bubble' ? 'full' : 'bubble',
   });
 };

 handleClose = () => {
   const { actions } = this.props;
   actions.handleChatBubble({ chatBubble: 'bubble' });
   actions.closeModal();
 };

Message Section Code (Reduced)

The message section comprises three parts: the actionBar, the messageSection and the messageComposer.

Action Bar

The actionBar consists of the action buttons – search, full screen, exit full screen and close. Clicking on the search button expands and opens up a search bar. On clicking the full screen icon, the openFullScreen function is called, which opens up the chat dialog. On clicking the close icon, the handleClose function is called, which sets the chatBubble reducer state to bubble. In full screen view, clicking on the exit full screen icon calls the closeFullScreen function, which sets the reducer state chatBubble to full.

   const actionBar = (
     <ActionBar fullScreenChat={fullScreenChat}>
       {fullScreenChat !== undefined ? (
         <FullScreenExit onClick={this.closeFullScreen} width={width} />
       ) : (
         <FullScreen onClick={this.openFullScreen} width={width} />
       )}
       <Close onClick={fullScreenChat ? this.handleClose : this.toggleChat}/>
     </ActionBar>
   );

Message Section

The message section has two parts: MessageList and MessageComposer. The MessageList is where the messages are viewed, and the MessageComposer allows you to interact with the bot through text and speech. The scrollbar is imported from the npm library react-custom-scrollbars. When the scroll bar is moved, it sets the state of showScrollTop and showScrollBottom in the chat. messageListItems consists of all the messages between the user and the bot.


   const messageSection = (
     <MessageSectionContainer showChatBubble={showChatBubble} height={height}>
       {loadingHistory ? (
         <CircularLoader height={38} />
       ) : (
         <Fragment>
           {fullScreenChat ? null : actionBar}
           <MessageList
             ref={c => {
               this.messageList = c;
             }}
             pane={pane}
             messageBackgroundImage={messageBackgroundImage}
             showChatBubble={showChatBubble}
             height={height}>
             <Scrollbars
               renderThumbHorizontal={this.renderThumb}
               renderThumbVertical={this.renderThumb}
               ref={ref => {
                 this.scrollarea = ref;
               }}
               onScroll={this.onScroll}
               autoHide={false}>
               {messageListItems}
               {!search && loadingReply && this.getLoadingGIF()}
             </Scrollbars>
           </MessageList>
           {showScrollTop && (
             <ScrollTopFab
               size="small"
               backgroundcolor={body}
               color={theme === 'light' ? 'inherit' : 'secondary'}
               onClick={this.scrollToTop}
             >
               <NavigateUp />
             </ScrollTopFab>
           )}
           {showScrollBottom && (
             <ScrollBottomContainer>
               <ScrollBottomFab
                 size="small"
                 backgroundcolor={body}
                 color={theme === 'light' ? 'inherit' : 'secondary'}
                 onClick={this.scrollToBottom}>
                 <NavigateDown />
               </ScrollBottomFab>
             </ScrollBottomContainer>
           )}
         </Fragment>
       )}
       <MessageComposeContainer
         backgroundColor={composer}
         theme={theme}
         showChatBubble={showChatBubble}>
         <MessageComposer
           focus={!search}
           dream={dream}
           speechOutput={speechOutput}
           speechOutputAlways={speechOutputAlways}
           micColor={button}
           textarea={textarea}
           exitSearch={this.exitSearch}
           showChatBubble={showChatBubble}
         />
       </MessageComposeContainer>
     </MessageSectionContainer>
   );

 const Chat = (
     <ChatBubbleContainer className="chatbubble" height={height} width={width}>
       {chatBubble === 'full' ? messageSection : null}
       {chatBubble !== 'minimised' ? (
         <SUSILauncherContainer>
           <SUSILauncherWrapper
              onClick={width < 500 ? this.openFullScreen : this.toggleChat}>
             <SUSILauncherButton data-tip="Toggle Launcher" />
           </SUSILauncherWrapper>
         </SUSILauncherContainer>
       ) : null}
     </ChatBubbleContainer>
   );


Integrating the SUSI.AI Web Client with the Smart Speaker

In the previous versions of the firmware for the smart speaker, we had one flask server for serving the configuration page and another for serving the control page. This led to inconsistencies in the frontend. To make the frontend and the user experience the same across all platforms, the SUSI.AI Web Client is now integrated into the smart speaker.

Now whenever the device is in Access Point mode (Setup mode), and the user accesses the web client running on the smart speaker, the device configuration page is shown.

If the device is not in access point mode, the normal Web Client is shown with a green bar on top saying “You are currently accessing the local version of your SUSI.AI running on your smart device. Configure now”
Clicking on the Configure now link redirects the user to the control page, where the user can change the settings for their smart speaker and control media playback.


To integrate both the control web page and the configure web page in the web client we needed to combine the flask server for the control and the configure page. This is done by adding all the endpoints of the configure server to the sound server and then renaming the sound server to control server – Merge Servers

Serving the WebClient through the control server


The next step is to add the Web Client static code to the control server and make it available on the root URL of the server.
To do this, first, we have to clone the web client during the installation and add the required symbolic links for the server.

echo "Downloading: Susi.AI webclient"
if [ ! -d "susi.ai" ]
then
    git clone --depth 1 -b gh-pages https://github.com/fossasia/susi.ai.git
    ln -s $PWD/susi.ai/static/* $PWD/susi_installer/raspi/soundserver/static/
    ln -s $PWD/susi.ai/index.html $PWD/susi_installer/raspi/soundserver/templates/
else
    echo "WARNING: susi.ai directory already present, not cloning it!" >&2
fi


The next step is to add the route for the Web Client in the control flask server.


@app.route('/')
def index():
    return render_template('index.html')


Here the index.html file is of the Web Client which we linked to the templates folder in the previous step.


Connecting the web client to the locally running SUSI Server


The Web Client by default connects to the SUSI Server at https://api.susi.ai. However, the smart speaker runs its own server. To connect to the local server we do the following:

– Before building the local version of the web client from source, we need to set the process.env.REACT_APP_LOCAL_ENV environment variable to true.

– While building, the JavaScript function check() checks if the above-mentioned environment variable is set to true and, if so, sets the API_URL to http://<host_IP_address>:4000.

function check() {
  if (process && process.env.REACT_APP_LOCAL_ENV === 'true') {
    return 'http://' + window.location.hostname + ':4000';
  }
  return 'https://api.susi.ai';
}

const urls = {
  API_URL: check(),
  // ...rest of the URLs
};
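
For example, the local build might then be produced like this (the exact npm script name is an assumption):

REACT_APP_LOCAL_ENV=true npm run build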

Resources

Javascript window.location – https://developer.mozilla.org/en-US/docs/Web/API/Window/location

Symlinks Linux – https://www.shellhacks.com/symlink-create-symbolic-link-linux/
SUSI.AI Web Client documentation – https://github.com/fossasia/susi.ai/blob/master/README.md

Tags

SUSI Smart Speaker, SUSI.AI, FOSSASIA, GSoC19


PSLab Desktop Architecture

In this blog post we will talk about the software architecture that we have followed in PSLab Desktop. We will discuss in detail how the three components, i.e. Electron, React and Python, gel together in our architecture and give the user a smooth, high-performance, cross-platform experience.

The picture below is all there is to the PSLab Desktop architecture. I have numbered all the main components and will break them down in later sections of the article. Once you keep this architecture in the back of your head while writing the app, you will be able to think clearly and code more efficiently.


An overview of the whole architecture

I am sure there are many other approaches available to build an electron app, but for the stack I used, this architecture turned out to be the only working model that went all the way from development to production and proved its stability after a lot of experimentation.

The PSLab Desktop happens to be a software interface that needs to interact and fetch data from an actual hardware device at a very fast rate and display the results in real time. The stability of this architecture has been tested under various use cases of the PSLab Desktop.


Let’s break it down

There are 4 major parts in the whole architecture, let us talk about them one by one.

1. Main Process

If you read the documentation of electron, it states that the purpose of the main process is to create and manage browser windows. Each browser window can then run its own Web page/GUI. As this process is kind of central to managing the whole app, there can be just one and only one main process throughout an electron app life cycle.

The main process has a secondary yet important task of communicating with the OS to make use of native GUI features like notifications, creating shortcuts, responding to auto-updates and so on.

It also plays a key role in communication, acting as the middle man for all the browser windows it has created. This is in fact the backbone of the whole architecture we are relying on.

2. Renderer Process (Visible)

Every browser window that the main process creates gives rise to a renderer process. For the sake of simplicity, let us call each browser window to be equivalent to a renderer process. The only purpose of a renderer process is to show a webpage. In other words, its task is to render the GUI.

It is also important to note that each browser window that the main process generates runs independently much like tabs on our browser. Even if a tab crashes, the rest of the tabs keep running. This very fundamental feature gives us limitless powers to create stuff.

Q1. Can I make use of native OS features directly from renderer process?

No, you cannot. This is due to security reasons, and thus to make use of native OS features, every renderer process has to communicate with the main process and ask it to get the job done on its behalf.

Note: Actually, you can access main process features directly from a renderer process via something called the remote module, but that is not considered a healthy practice, as careless code may end up leaking resources.

Everything that you need to know about main and renderer process has been mentioned in the electron documentation.

You may have noticed that I have written the word visible separately for this renderer process. Why is that? Because this is the only renderer that will be visible. As we have mentioned before, we are using React as one of our tech stack elements, which is a great tool for writing Single Page Applications. And as you may have guessed, unlike old web apps, a Single Page Application does not need to open several tabs; rather, it is like a mobile app that gives the user a smooth experience on a single screen/tab. Thus, we only need one tab to render the whole UI for the app. One tab equals one renderer process.

3. Renderer Process (Hidden)

But you just said that we only need one renderer process!!

Well I said only one visible process. Now if you were to make a simple GUI focused app, chances are you’ll never need point number 3 but we are here to push the electron app to new limits.

What happens when you want to perform a thread blocking ( heavy computation ) task? Where would you like to perform it? Main process? Nope, your app will crash if you block the main process. Visible renderer process? Nope, your GUI will start to lag and choke. So where do you do it? Yup, you guessed it, this is where the hidden renderer process comes in.

The hidden process is just another tab/window that is inherently hidden from the user. This is because our goal is not to render a UI in it but to perform a thread blocking task. The idea is to create and destroy hidden windows as and when needed in order to do heavy lifting in the background. Thus to do a thread blocking task, you would simply create a hidden window, finish the task in that hidden window ( running in the background ) without causing any issues with the visible components of your app and when you are done, just destroy the hidden window to claim back your resources. It is that simple.

The best part is you can create as many as you want.

This is the article I read to decide upon the approach I would adopt to tackle CPU intensive background tasks. It would be an interesting read if you want to compare this approach to the other options I had.

4. Python Script

By now you already know that Python is one of the key elements of our tech stack, so we have to find a way to make it work effortlessly with the electron code. The simplest way to do this was making use of an npm module called python-shell, which allows us to execute external Python scripts from a Node.js application. As you are allowed to use Node.js features in an electron app, npm modules work well in your electron code too.

Q1. From where do you call it?

ans. Obviously, if you are using Python for performing some task, you are planning to do some heavy lifting. But heavy lifting or not, the ideal spot to run the Python script is from a hidden renderer process.

It is important to note that the script is not running inside your electron app, rather running in parallel. This diagram should help you visualize it better.


Script initialization using render process

Now obviously your python script may need some input and will most certainly generate some output when it runs. So as the script runs outside your app, you cannot directly grab or send data to the script. But python-shell has a nice feature which can be used to deal with this situation. We will talk about it in the next section.

The way you would design the script itself is again a topic for discussion which will be covered as we go into the details of coding.

Communications

An important point of discussion is how will each of these four components talk to each other. And to be frank, we just need to take care of three types of communication in our app.

1. Main — Renderer :

The actual means of communication is called the electron IPC (Inter Process Communication). The renderer processes use the ipcRenderer while the main process uses the ipcMain.

Now if you were to ask how it actually looks like then the best analogy is event listeners. You just set up a bunch of listeners on both ends depending on what you plan to listen for. Later on in the app life cycle, you can trigger these listeners manually to get the desired output.
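
A minimal sketch of this listener setup (the channel names and payloads are illustrative):

// In the main process
const { ipcMain } = require('electron');

ipcMain.on('start-task', (event, args) => {
  // ... perform some work, then bounce a message back
  event.reply('task-done', { status: 'ok' });
});

// In a renderer process
const { ipcRenderer } = require('electron');

ipcRenderer.send('start-task', { id: 1 });
ipcRenderer.on('task-done', (event, result) => console.log(result));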


Placing listeners on Main and render threads for bouncing messages

This idea can be extended to any number of render and main process combination. Under the hood, this is all there is.

2. Visible Renderer— Hidden Renderer :

There is no direct means of communication between two renderer processes. So the hidden renderer process cannot directly talk to the visible renderer process. The only way to actually establish a communication is by bouncing messages off the main process much like a relay system.


Hidden render – Visible render process communication

3. Hidden Renderer — Python Script :

The python-shell library provides us a way to make use of stdin and stdout channels for sending and receiving data from the python script.

So the best way to write your python code would be to assume that you would be using it from the terminal where you directly read data from user input and print data out to the terminal using simple print statement.

The message format that we will be using is JSON (JavaScript Object Notation) although many other options are available. JSON actually makes it easier to structure data which in turn makes it really easy to extract data on both ends with very minimal overhead.


Script-Hidden render communication

Note: Keep in mind that I have mentioned that this is the message format I use; don’t end up relating this to HTTP request-response because the term JSON is used quite heavily in that domain.
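
A hedged sketch of this stdin/stdout JSON exchange from a hidden renderer using python-shell (the script name and message fields are illustrative):

const { PythonShell } = require('python-shell');

// mode: 'json' makes python-shell JSON-stringify outgoing messages and
// JSON-parse every line the script prints to stdout
const pyshell = new PythonShell('scripts/capture.py', { mode: 'json' });

// Send a command to the script's stdin
pyshell.send({ command: 'START_CAPTURE', samples: 1000 });

// Receive each JSON line the script prints
pyshell.on('message', message => {
  console.log('Data from script:', message);
});

// Close stdin and wait for the script to finish
pyshell.end(err => {
  if (err) throw err;
});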


Introduction of event owner role in Open Event

This blog post will showcase the introduction of a new owner role for users in Open Event. Now a user associated with organizing an event can have any of the following roles:

  1. Owner
  2. Organizer
  3. Co-organizer
  4. Track-organizer
  5. Moderator
  6. Registrar

Till now, the user creating the event had the organizer role, which was not exclusive: an organizer can invite other users to be organizers. Because of this, we couldn’t give exclusive rights to the event creator.

But there can only be a single owner of an event. So, the introduction of the new owner role helps us distinguish the owner and give him/her exclusive rights for the event.
This refactor involved a lot of changes. Let’s go step by step:

  • I updated the role of the user creating the event to be owner by default. For this, we query the user and the owner role and use them to create a new UsersEventsRoles object and save it to the database. Then we create a role invite object using the user email, role title, event id and role id.
def after_create_object(self, event, data, view_kwargs):
    ...
    user = User.query.filter_by(id=view_kwargs['user_id']).first()
    role = Role.query.filter_by(name=OWNER).first()
    uer = UsersEventsRoles(user, event, role)
    save_to_db(uer, 'Event Saved')
    role_invite = RoleInvite(user.email, role.title_name, event.id, role.id,
                             datetime.now(pytz.utc), status='accepted')
    save_to_db(role_invite, 'Owner Role Invite Added')
  • We included a new function is_owner in the permission_manager helper which checks if the current_user is the owner of the event passed in kwargs. If the user is not the owner, a ForbiddenError is returned.
@jwt_required
def is_owner(view, view_args, view_kwargs, *args, **kwargs):
   user = current_user
   if user.is_staff:
       return view(*view_args, **view_kwargs)
   if not user.is_owner(kwargs['event_id']):
       return ForbiddenError({'source': ''}, 'Owner access is required').respond()
   return view(*view_args, **view_kwargs)
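
For reference, a hedged sketch of how the corresponding is_owner helper on the User model might look, following the role-checking pattern used for the other roles (the actual implementation may differ):

class User(SoftDeletionModel):
    ...

    def _is_role(self, role_name, event_id):
        # Check whether this user holds the given role for the given event
        role = Role.query.filter_by(name=role_name).first()
        uer = UsersEventsRoles.query.filter_by(
            user=self, event_id=event_id, role=role).first()
        return uer is not None

    def is_owner(self, event_id):
        return self._is_role(OWNER, event_id)
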
  • Updated event schema to add new owner fields and relationship. We updated the fields –
  1. organizer_name             -> owner_name
  2. has_organizer_info        -> has_owner_info
  3. organizer_description    -> owner_description

We also included owner relationship in the EventSchemaPublic

@use_defaults()
class EventSchemaPublic(SoftDeletionSchema):
   ...
   owner_name = fields.Str(allow_none=True)
   has_owner_info = fields.Bool(default=False)
   owner_description = fields.Str(allow_none=True)

   ...
   owner = Relationship(attribute='owner',
                        self_view='v1.event_owner',
                        self_view_kwargs={'id': '<id>'},
                        related_view='v1.user_detail',
                        schema='UserSchemaPublic',
                        related_view_kwargs={'event_id': '<id>'},
                        type_='user')
  • To accommodate the introduction of the owner role, we have to introduce a new boolean field is_user_owner and a new relationship owner_events to the UserSchema. The relationship owner_events can be used to fetch the list of events of which a given user is the owner.
class UserSchema(UserSchemaPublic):
   ...
   is_user_owner = fields.Boolean(dump_only=True)
   ...
   owner_events = Relationship(
       self_view='v1.user_owner_event',
       self_view_kwargs={'id': '<id>'},
       related_view='v1.event_list',
       schema='EventSchema',
       many=True,
       type_='event')
  • Similarly, we need to update the Event model too. A new owner relationship is introduced to the event model which is related to User; it stores the owner of the event.
    We then introduce a new function get_owner() to the model which iterates through all the roles and returns the user if the role is owner.
class Event(SoftDeletionModel):
   ...
   owner = db.relationship('User',
             viewonly=True,
             secondary='join(UsersEventsRoles, Role, and_(Role.id ==
                        UsersEventsRoles.role_id, Role.name == "owner"))',
             primaryjoin='UsersEventsRoles.event_id == Event.id',
             secondaryjoin='User.id == UsersEventsRoles.user_id',
             backref='owner_events',
             uselist=False)


List SUSI.AI Devices in Admin Panel

In this blog I’ll be explaining the Devices tab in the SUSI.AI Admin Panel. Admins can now view the connected devices of all users, with view, edit and delete actions. Admins can also directly view the location of a device on the map by clicking on the device location of that user.

Implementation

List Devices

Admin device Tab

The Devices tab displays the device name, macId, room, email id, date added, last active, last login IP and location of each device. The loadDevices function is called on componentDidMount; it calls the fetchDevices API, which fetches the list of devices from the /aaa/getDeviceList.json endpoint. The list of all devices is stored in the devices array, where each device is an object with the above properties. Clicking on the device location opens a popup displaying the device location on the map.
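
A hedged sketch of what the fetchDevices API helper might look like, following the ajax pattern used across the SUSI.AI client (the import paths are illustrative):

import ajax from '../helpers/ajax';
import urls from '../utils/urls';

const { API_URL } = urls;

// Fetches the list of all connected devices, grouped by user
export function fetchDevices() {
  const url = `${API_URL}/aaa/getDeviceList.json`;
  return ajax.get(url, {});
}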

loadDevices = () => {
   fetchDevices()
     .then(payload => {
       const { devices } = payload;
       let devicesArray = [];
       devices.forEach(device => {
         const email = device.name;
         const devices = device.devices;
         const macIdArray = Object.keys(devices);
         const lastLoginIP =
           device.lastLoginIP !== undefined ? device.lastLoginIP : '-';
         const lastActive =
           device.lastActive !== undefined
             ? new Date(device.lastActive).toDateString()
             : '-';
         macIdArray.forEach(macId => {
           const device = devices[macId];
           let deviceName = device.name !== undefined ? device.name : '-';
           deviceName =
             deviceName.length > 20
               ? deviceName.substr(0, 20) + '...'
               : deviceName;
            let location = 'Location not given';
            if (device.geolocation) {
              // Rendered as a clickable element that opens the map popup
              location = (
                <span>
                  {device.geolocation.latitude}, {device.geolocation.longitude}
                </span>
              );
            }
           const dateAdded =
             device.deviceAddTime !== undefined
               ? new Date(device.deviceAddTime).toDateString()
               : '-';
 
           const deviceObj = {
             deviceName,
             macId,
             email,
             room: device.room,
             location,
             latitude:
               device.geolocation !== undefined
                 ? device.geolocation.latitude
                 : '-',
             longitude:
               device.geolocation !== undefined
                 ? device.geolocation.longitude
                 : '-',
             dateAdded,
             lastActive,
             lastLoginIP,
           };
           devicesArray.push(deviceObj);
         });
       });
       this.setState({
         loadingDevices: false,
         devices: devicesArray,
       });
     })
     .catch(error => {
       console.log(error);
     });
 };

View Device

User Device Page

The view action redirects to the user’s /mydevices?email=<email>&macid=<macid>. This allows the admin to have full control of the My Devices section of the user. The admin can change device details and delete devices, and can also see all the devices of the user from the ALL tab. To edit a device, click on the edit icon in the table, update the details and click on the check icon. To delete a device, click on delete, which then asks for confirmation of the device name and, on confirmation, deletes the device.

Edit Device

Edit Device Dialog

The edit action opens up a dialog modal which allows the admin to update the device name and room. Clicking on the change button calls the modifyUserDevices API, which takes the email id, macId, device name and room name as parameters. This calls the API endpoint /aaa/modifyUserDevices.json.

 handleChange = event => {
   this.setState({ [event.target.name]: event.target.value });
 };

 render() {
   const { macId, email, handleConfirm, handleClose } = this.props;
   const { room, deviceName } = this.state;
   return (
     <React.Fragment>
       <DialogTitle>Edit Device Details for {macId}</DialogTitle>
       <DialogContent>
         <OutlinedTextField
           value={deviceName}
           label="Device Name"
           name="deviceName"
           variant="outlined"
           onChange={this.handleChange}
           style={{ marginRight: '20px' }}
         />
         <OutlinedTextField
           value={room}
           label="Room"
           name="room"
           variant="outlined"
           onChange={this.handleChange}
         />
       </DialogContent>
       <DialogActions>
         <Button
           key={1}
           color="primary"
           onClick={() => handleConfirm(email, macId, room, deviceName)}>
           Change
         </Button>
         <Button key={2} color="primary" onClick={handleClose}>
           Cancel
         </Button>
       </DialogActions>
     </React.Fragment>
   );
 }

Delete Device

Delete Device Dialog

The delete action opens up a confirm delete dialog modal. To delete a device, enter the device name and click on delete. This calls the confirmDelete function, which calls the removeUserDevice API with the email id and macId as parameters. This API hits the endpoint /aaa/removeUserDevices.json.

confirmDelete = () => {
   const { actions } = this.props;
   const { macId, email } = this.state;
   removeUserDevice({ macId, email })
     .then(payload => {
       actions.openSnackBar({
         snackBarMessage: payload.message,
         snackBarDuration: 2000,
       });
       actions.closeModal();
       this.setState({
         loadingDevices: true,
       });
       this.loadDevices();
     })
     .catch(error => {
       actions.openSnackBar({
         snackBarMessage: `Unable to delete device with macID ${macId}. Please try again.`,
         snackBarDuration: 2000,
       });
     });
 };

To conclude, admins can now view all the connected SUSI.AI devices along with the user details and location. They can also access a user’s My Devices tab in the Dashboard and update and delete devices.
