Allow Same Discount/Access Code for Multiple Events in the Open Event Server

In this blog post, I will show how to allow the system to create the same discount/access code for multiple events in the Open Event Server.

What was the issue:

The main problem was that the server identified a discount or access code by the code string alone, which did not allow multiple events to have the same discount/access codes.

Can you think of a better solution to this?
Yes, we should search for a code based on both the discount/access code itself and the event it is associated with.

Changing the endpoint:

To do so, we pass the id of the event along with the discount/access code in the endpoint, so that we can search the database based on the event_id as well as the code itself.

Changes in Discount/Access Code Endpoint:

'/event/<int:discount_event_id>/discount-code/<code>'
'/event/<int:access_event_id>/access-code/<code>'

Change logic for database search:

Now, when searching for a discount/access code in the database, we need to pass the event_id along with the code, so that we get the discount/access code record associated with that event, even if multiple events have a code with the same name.

Changes in Database search logic:

access = db.session.query(AccessCode).filter_by(code=kwargs.get('code'),
    event_id=kwargs.get('access_event_id')).first()

discount = db.session.query(DiscountCode).filter_by(code=kwargs.get('code'),
    event_id=kwargs.get('discount_event_id')).first()

Change endpoint in API docs and update Dredd hooks:

Now that we have changed the endpoint used to get a discount/access code, we need to update the API docs as well as the Dredd hooks to accommodate this change.

Changes in API docs:

## Get Discount Code Detail using the code [/v1/event/{event_id}/discount-code/{code}]
## Get Access Code Detail using the code [/v1/event/{event_id}/access-code/{code}]

Changes in Dredd Hooks:

In discount code hook:

discount_code.event_id = 1

In access code hook:

event = EventFactoryBasic()
db.session.add(event)
db.session.commit()

Resources:

Link to Issue: fossasia/open-event-server#6027
Link to PR: fossasia/open-event-server#6208


Join Codeheat Coding Contest 2019/20

Master Git, contribute to Open Source, and win a trip to the FOSSASIA Summit Singapore with Codeheat! Codeheat is the annual coding contest for developers to contribute to Free and Open Source software (FOSS) and open hardware projects of FOSSASIA. Join development of real world software applications and win awesome prizes, build up your developer profile, learn new coding skills, collaborate with the community and make new friends from around the world! Sign up now for the fourth edition of Codeheat on the website and follow Codeheat on Twitter.

Start date: September 15, 2019

End date: February 2, 2020

Which Projects Participate

Open Event – Eventyay / Code / Chat

SUSI AI – Website / Code / Chat

Pocket Science Lab (PSLab) – Website / Code / Chat

Phimpme Android – App / Code / Chat

Meilix Linux Distribution – Code / Chat

Voicerepublic – Website / Code / Chat

Badge Magic – App / Code / Chat

Neurolab – Code / Chat

Badgeyay – Website / Code / Chat

How to Join the Contest

  • The contest is open to everyone.
  • Participants can join at any time
  • Register on the site and check out the Frequently Asked Questions for more details.
  • Also join the FOSSASIA Gitter chat and communicate with mentors and follow developers on project specific channels. 

What are the Prizes

  • Winners (3 prizes): Listed on website, certificate, 600SGD travel voucher, 5-night accommodation in Singapore, Tshirt and FOSSASIA limited edition swags. 
  • Finalist (7 prizes): Listed on website, certificate, travel voucher of 100 SGD, Tshirt and FOSSASIA limited edition swags. 
  • Active Contributors (unlimited): Certificate, Codeheat Tshirt and FOSSASIA limited edition swags (with at least 10 merged pull requests)
  • Community Participants (unlimited): Digital Certificate of Participation (with at least 5 merged pull requests)

What are the Judging Criteria

Our jury will review the work of the 10 developers who have the highest number of quality contributions during the contest. Contributions include pull requests/code commits, scrum reports, articles, screencasts, community engagement and outreach activities. The mentors will look at:

Sustainability, which means that we specifically value contributions that make the project sustainable by building a community where developers collaborate with each other in a friendly way and support the project's development through peer reviews, on-boarding new members, and helping fellow contributors. It also means that, while code is the most important success criterion for winning the contest, we are also looking for contributions in other areas that make projects easy to join, to deploy and to use. This includes:

  • creating and enhancing documentation
  • developing how-tos
  • writing technical blog posts
  • sharing work in regular scrum updates to enhance communication
  • organizing local meetups, workshops, presentations 

Quality vs. Quantity: The sheer number of pull requests is not the only criterion for choosing the winners. Quality work is appreciated – some issues are more challenging than others just by their nature (for example, heavy coding versus solving a text typo bug). It is entirely possible that someone who completed 53 issues could be chosen as a winner over someone who completed 88 issues.

How Are the Winners Decided

  • Grand Prize Winners: Three developers will be selected by mentors from the top 10 contributors according to code quality, relevance of commits and contributions that help to bring the project forward.  
  • Finalist Winners: After the grand prize winners are selected, the remaining seven contributors of the top 10 will receive finalist winner prizes.
  • Other contributors who have more than 10 merged pull requests during the contest will receive a Thank you package. Anyone who has 5 pull requests merged will receive a digital certificate.

Links

Website: codeheat.org

Codeheat Twitter: twitter.com/codeheat_

FOSSASIA Twitter: twitter.com/fossasia

Codeheat Facebook: facebook.com/codeheat.org


Implementation of Shimmer Effect in Layouts in SUSI.AI Android App

The shimmer effect was created by Facebook to indicate that data is being loaded from the internet on a page. It was created as an alternative to the existing ProgressBar and the usual loaders to give a better user experience with the UI.

Let’s see how we can implement it. Here, I am going to use SUSI.AI (a smart assistant app) as the reference app for the code demonstration. I am working on this project during my GSoC period, and while working I found the need to implement this feature in many places. So, I am writing this blog to share my experience of how I implemented it in the app.

First of all, we need to add the shimmer dependency in the app level Gradle file.
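
For reference, the dependency line could look like the following, assuming Facebook's Shimmer library is used (the version shown is only illustrative):

implementation 'com.facebook.shimmer:shimmer:0.5.0'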

Now, we need to create a placeholder layout simply by using views. This placeholder should resemble the actual layout. Usually, a grey color is preferred for the placeholder background. A placeholder should not have any text written in it; it should consist of plain views only (for example, grey bars in place of text and a grey circle in place of an image). Let's consider the placeholder used in SUSI.AI.

Now let’s have a glance at the actual items whose placeholders we have made.

Now, after creating the placeholder, we need to include it in the main layout file, wrapped inside a shimmer container. It is done in the following way:
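
A rough sketch of this, assuming the placeholder above is saved as placeholder_layout.xml and Facebook's ShimmerFrameLayout is used (the ids and file names here are illustrative, not the exact SUSI.AI source):

<!-- Sketch only: placeholder_layout and shimmer_frame_layout are assumed names. -->
<com.facebook.shimmer.ShimmerFrameLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/shimmer_frame_layout"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:orientation="vertical">

        <include layout="@layout/placeholder_layout" />
        <include layout="@layout/placeholder_layout" />
        <!-- ...the include is repeated until the whole screen is covered -->

    </LinearLayout>
</com.facebook.shimmer.ShimmerFrameLayout>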

Here, I have added the placeholders 6 times so that the entire screen gets covered up. You can add it as many times as you want.

The next and final task is to start and stop the shimmer effect according to the logic of the code. Here, the shimmer starts as soon as the fragment is created and stops when the data is successfully loaded from the server. Have a look at how to create the reference.

First of all, we need to create a reference to the shimmer layout and then use this reference to start/stop the shimmer effect. In Kotlin, we can also use the layout id directly (via synthetic imports) without explicitly creating a reference.

We start the shimmer effect simply by calling the startShimmer() function on the shimmer reference.

Similarly, we can stop it by calling the stopShimmer() function on the same reference.
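
A minimal Kotlin sketch of this flow, assuming the ShimmerFrameLayout in the layout has the id shimmer_frame_layout (the id and the surrounding fragment code are illustrative, not the exact SUSI.AI source):

// Inside the fragment, e.g. in onViewCreated(): create a reference to the ShimmerFrameLayout.
// (With Kotlin synthetic imports the layout id could be used directly instead.)
val shimmerFrame: ShimmerFrameLayout = view.findViewById(R.id.shimmer_frame_layout)

// Start the effect as soon as the fragment is created, while the data is still loading.
shimmerFrame.startShimmer()

// ...and once the data has been successfully loaded from the server:
shimmerFrame.stopShimmer()
shimmerFrame.visibility = View.GONE   // hide the placeholder so the real content is visible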

Resources: 

Framework: Shimmer in Android

Documentation: ShimmerAndroid Design

SUSI.AI Android App: PlayStore GitHub

Tags:

SUSI.AI Android App, Kotlin, SUSI.AI, FOSSASIA, GSoC, Android, Shimmer


Gestures in SUSI.AI Android

Gestures have become one of the most widely used features of an app. Users usually expect that certain tasks will be performed by the app when they execute a gesture on the screen.

A “touch gesture” occurs when a user places one or more fingers on the touch screen, and your application interprets that pattern of touches as a particular gesture. There are correspondingly two phases to gesture detection:

  1. Gather data about touch events.
  2. Interpret the data to see if it meets the criteria for any of the gestures your app supports.

There are various kinds of gestures supported by android. Some of them are:

  • Tap
  • Double Tap
  • 2-finger Tap
  • 2-finger-double tap
  • 3-finger tap
  • Pinch

In this post, we will go through the SUSI.AI Android app (a smart assistant app), which uses a “right to left swipe” gesture detector. When such a gesture is detected inside the Chat Activity, it opens the Skills Activity. This makes the app very user-friendly. Before we start implementing the code, let's go through the steps mentioned above in detail.

1st Step “Gather Data”: 

When a user places one or more fingers on the screen, this triggers the callback onTouchEvent() on the View that received the touch events. For each sequence of touch events (position, pressure, size, the addition of another finger, etc.) that is ultimately identified as a gesture, onTouchEvent() is fired several times.

The gesture starts when the user first touches the screen, continues as the system tracks the position of the user’s finger(s), and ends by capturing the final event of the user’s fingers leaving the screen. Throughout this interaction, the MotionEvent delivered to onTouchEvent() provides the details of every interaction. Your app can use the data provided by the MotionEvent to determine if a gesture it cares about happened.

2nd Step “Data Interpretation”:

The data received needs to be properly interpreted. The gestures should be properly recognized and processed to perform further actions. An app might have different gestures integrated into the same page, like “swipe-to-refresh”, “double tap”, “single tap”, etc. After successfully differentiating between these kinds of gestures, the corresponding functions/tasks are executed.

Let’s go through the code present in SUSI now.

First of all, a new class “CustomGestureListener” is created. This class extends “SimpleOnGestureListener”, which is part of Android's “GestureDetector” framework, and it contains the function onFling(). This function detects fling gestures along the horizontal axis: event1.getX() and event2.getX() give the horizontal positions at which the gesture started and ended, and when their difference is greater than 0, it indicates that the user has swiped from right to left. However, this would trigger even on a very minor movement that the user might have made accidentally or by just touching the screen. To avoid such minor impulses, we execute our task only when the horizontal distance lies between 100 and 1000. This filters out minor gestures.
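
A minimal Kotlin sketch of such a listener; the class body below illustrates the idea rather than the exact SUSI.AI source, and the onSwipeRightToLeft callback is an assumed name:

class CustomGestureListener(private val onSwipeRightToLeft: () -> Unit) :
    GestureDetector.SimpleOnGestureListener() {

    override fun onFling(e1: MotionEvent, e2: MotionEvent, velocityX: Float, velocityY: Float): Boolean {
        // A positive difference means the finger travelled from right to left.
        val deltaX = e1.x - e2.x
        // Ignore tiny, accidental movements and unrealistically large jumps.
        if (deltaX in 100f..1000f) {
            onSwipeRightToLeft()
            return true
        }
        return false
    }
}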

Inside the onCreate() method, a new GestureDetector instance is created, passing a reference to the enclosing activity and an instance of our new CustomGestureListener class as arguments. Finally, an onTouchEvent() callback method is implemented for the activity, which simply calls the corresponding onTouchEvent() method of the GestureDetector object, passing the MotionEvent object as an argument.
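
A sketch of this wiring in Kotlin, with illustrative names (SkillsActivity and activity_chat stand in for the actual screen and layout opened in the app):

class ChatActivity : AppCompatActivity() {

    private lateinit var gestureDetector: GestureDetector

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_chat)   // layout name is illustrative

        // The detector receives the enclosing activity and our listener as arguments.
        gestureDetector = GestureDetector(this, CustomGestureListener {
            // Right-to-left swipe detected: open the skills screen.
            startActivity(Intent(this@ChatActivity, SkillsActivity::class.java))
        })
    }

    override fun onTouchEvent(event: MotionEvent): Boolean {
        // Forward every touch event to the detector so the gesture can be recognized.
        return gestureDetector.onTouchEvent(event) || super.onTouchEvent(event)
    }
}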

Summary:

Gestures are usually implemented to enhance the user experience while using the application. Though there are some predefined gestures in Android, we can also create gestures of our own and use them in our application.

Resources: 

Documentation: Gestures

Reference: Gesture

SUSI.AI Android App: PlayStore GitHub

Tags:

SUSI.AI Android App, Kotlin, SUSI.AI, FOSSASIA, GSoC, Android, Gestures


Creating an awesome ‘About Us’ page for the Open Event Organizer Android App

Open Event Organizer App (Eventyay Organizer App) is an Android app based on the Eventyay platform. It provides various features with which organizers can manage their events.

This article will talk about a library that can help you create a great about page for Android apps without the need to make custom layouts.

It is the Android About Page library.

Let’s go through the process of its implementation in the Eventyay Organizer App.

First add the dependency in the app level build.gradle file:

implementation 'com.github.medyo:android-about-page:1.2.5'

Creating elements to be added:

Element legalElement = new Element();
legalElement.setTitle("Legal");

Element developersElement = new Element();      
developersElement.setTitle(getString(R.string.developers));

Element shareElement = new Element();
shareElement.setTitle(getString(R.string.share));

Element thirdPartyLicenses = new Element();       
thirdPartyLicenses.setTitle(getString(R.string.third_party_licenses));

Setting image, description and adding items in the About Page:

AboutPage aboutPage = new AboutPage(getContext())
            .isRTL(false)
            .setImage(R.mipmap.ic_launcher)            
            .setDescription(getString(R.string.about_us_description))
            .addItem(new Element("Version " + BuildConfig.VERSION_NAME, R.drawable.ic_info))
            .addGroup("Connect with us")
            .addGitHub("fossasia/open-event-organizer-android")
            .addPlayStore(getContext().getPackageName())
            .addWebsite(getString(R.string.FRONTEND_HOST))
            .addFacebook(getString(R.string.FACEBOOK_ID))
            .addTwitter(getString(R.string.TWITTER_ID))
            .addYoutube(getString(R.string.YOUTUBE_ID))
            .addItem(developersElement)
            .addItem(legalElement)
            .addItem(shareElement);

if (BuildConfig.FLAVOR.equals("playStore")) {    
    aboutPage.addItem(thirdPartyLicenses);
}

View aboutPageView = aboutPage.create();

Now add the aboutPageView in the fragment.

To make the values configurable from build.gradle, add this in the defaultConfig:

resValue "string", "FACEBOOK_ID", "eventyay"
resValue "string", "TWITTER_ID", "eventyay"
resValue "string", "YOUTUBE_ID", "UCQprMsG-raCIMlBudm20iLQ"

That’s it! The About Page is now ready.

Resources:

Library used: Android About Page

Pull Request: #1904

Open Event Organizer App: Project repo, Play Store, F-Droid


Mapbox implementation in Open Event Organizer Android App

Open Event Organizer Android App is used by event organizers to manage events on the Eventyay platform. While creating or updating an event, location is one of the important factors which needs to be added so that the attendees can be informed of the venue.

Here, we’ll go through the process of implementing Mapbox Places Autocomplete for event location in the F-Droid build variant.

The first step is to create an environment variable for the Mapbox Access Token. 

def MAPBOX_ACCESS_TOKEN = System.getenv('MAPBOX_ACCESS_TOKEN') ?: "YOUR_ACCESS_TOKEN"

Add the Mapbox dependency:

fdroidImplementation 'com.mapbox.mapboxsdk:mapbox-android-plugin-places-v8:0.9.0'

Fetching the access token in EventDetailsStepOne as well as UpdateEventFragment:

ApplicationInfo applicationInfo = null;
try {
    applicationInfo = getContext().getPackageManager()
        .getApplicationInfo(getContext().getPackageName(), PackageManager.GET_META_DATA);
} catch (PackageManager.NameNotFoundException e) {
    Timber.e(e);
}
Bundle bundle = applicationInfo.metaData;

String mapboxAccessToken = bundle.getString(getString(R.string.mapbox_access_token));

The app should not crash if the access token is not available, so we need to add a check. Since the default value of the access token is set to “YOUR_ACCESS_TOKEN”, the following code checks whether a real token is available:

if (mapboxAccessToken.equals("YOUR_ACCESS_TOKEN")) {
    ViewUtils.showSnackbar(binding.getRoot(), R.string.access_token_required);
    return;
}

Initializing the PlaceAutocompleteFragment:

PlaceAutocompleteFragment autocompleteFragment = PlaceAutocompleteFragment.newInstance(
    mapboxAccessToken, PlaceOptions.builder().backgroundColor(Color.WHITE).build());

getFragmentManager().beginTransaction()
    .replace(R.id.fragment, autocompleteFragment)
    .addToBackStack(null)
    .commit();

Now, a listener needs to be set up to get the selected place and set the various fields like latitude, longitude, location name and searchable location name.

autocompleteFragment.setOnPlaceSelectedListener(new PlaceSelectionListener() {
                @Override
                public void onPlaceSelected(CarmenFeature carmenFeature) {
                    Event event = binding.getEvent();
                    event.setLatitude(carmenFeature.center().latitude());
                    event.setLongitude(carmenFeature.center().longitude());
                    event.setLocationName(carmenFeature.placeName());
                    event.setSearchableLocationName(carmenFeature.text());
                    binding.form.layoutLocationName.setVisibility(View.VISIBLE);
                    binding.form.locationName.setText(event.getLocationName());
                    getFragmentManager().popBackStack();
                }

                @Override
                public void onCancel() {
                    getFragmentManager().popBackStack();
                }
            });

This brings the process of implementing Mapbox SDK to completion.

GIF showing the working of Mapbox Places Autocomplete

Resources:

Documentation: Mapbox Places Plugin

Open Event Organizer App: Project repo, Play Store, F-Droid


Implementation of Android App Links in Open Event Organizer App

Android App Links are HTTP URLs that bring users directly to specific content in an Android app. They allow the website URLs to immediately open the corresponding content in the related Android app.

Whenever such a URL is clicked, a dialog is opened allowing the user to select a particular app which can handle the given URL.

In this blog post, we will be discussing the implementation of Android App Links for password reset in Open Event Organizer App, the Android app developed for event organizers using the Eventyay platform.

What is the purpose of using App Links?

App Links are used to open the corresponding app when a link is clicked.

  • If the app is installed, then it will open on clicking the link.
  • If the app is not installed, then the link will open in the browser.

The implementation involves three steps:

  1. Creating intent filters in the manifest.
  2. Adding code to the app’s activities to handle incoming links.
  3. Associating the app and the website with Digital Asset Links.

Adding Android App Links

The first step is to add an intent filter for the AuthActivity.

<intent-filter>
    <action android:name="android.intent.action.VIEW" />

    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.BROWSABLE" />

    <data
        android:scheme="https"
        android:host="@string/FRONTEND_HOST"
        android:pathPrefix="/reset-password" />
</intent-filter>

Here, FRONTEND_HOST is the URL for the web frontend of the Eventyay platform.

This needs to be handled in AuthActivity:

@Override
protected void onNewIntent(Intent intent) {
    super.onNewIntent(intent);
    handleIntent(intent);
}
private void handleIntent(Intent intent) {
    String appLinkAction = intent.getAction();
    Uri appLinkData = intent.getData();

    if (Intent.ACTION_VIEW.equals(appLinkAction) && appLinkData != null) {
        LinkHandler.Destination destination = LinkHandler.getDestinationAndToken(appLinkData.toString()).getDestination();
        String token = LinkHandler.getDestinationAndToken(appLinkData.toString()).getToken();

        if (destination.equals(LinkHandler.Destination.RESET_PASSWORD)) {
            getSupportFragmentManager().beginTransaction()
                .replace(R.id.fragment_container, ResetPasswordFragment.newInstance(token))
                .commit();
        }
    }
}

 Call the handleIntent() method in onCreate():

handleIntent(getIntent());

Get the token in onCreate() method of ResetPasswordFragment:

@Override
public void onCreate(@Nullable Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    if (getArguments() != null)
        token = getArguments().getString(TOKEN_KEY);
}

Set the token in ViewModel:

if (token != null)
    resetPasswordViewModel.setToken(token);

LinkHandler class for handling the links:

package com.eventyay.organizer.utils;

public class LinkHandler {

    public Destination destination;
    public String token;

    public LinkHandler(Destination destination, String token) {
        this.destination = destination;
        this.token = token;
    }

    public static LinkHandler getDestinationAndToken(String url) {
        if (url.contains("reset-password")) {
            String token = url.substring(url.indexOf('=') + 1);
            return new LinkHandler(Destination.RESET_PASSWORD, token);
        } else if (url.contains("verify")) {
            String token = url.substring(url.indexOf('=') + 1);
            return new LinkHandler(Destination.VERIFY_EMAIL, token);
        } else
            return null;
    }

    public Destination getDestination() {
        return destination;
    }

    public String getToken() {
        return token;
    }

    public enum Destination {
        VERIFY_EMAIL,
        RESET_PASSWORD
    }
}

The Destination enum is used to handle links for both password reset and email verification.

Finally, the unit tests for LinkHandler:

package com.eventyay.organizer.utils;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.JUnit4;

import static org.junit.Assert.assertEquals;

@RunWith(JUnit4.class)
public class LinkHandlerTest {

    private String resetPassUrl = "https://eventyay.com/reset-password?token=12345678";
    private String verifyEmailUrl = "https://eventyay.com/verify?token=12345678";

    @Test
    public void shouldHaveCorrectDestination() {
        assertEquals(LinkHandler.Destination.RESET_PASSWORD,
            LinkHandler.getDestinationAndToken(resetPassUrl).getDestination());
        assertEquals(LinkHandler.Destination.VERIFY_EMAIL,
            LinkHandler.getDestinationAndToken(verifyEmailUrl).getDestination());
    }

    @Test
    public void shouldGetPasswordResetToken() {
        assertEquals(LinkHandler.Destination.RESET_PASSWORD,
            LinkHandler.getDestinationAndToken(resetPassUrl).getDestination());
        assertEquals("12345678",
            LinkHandler.getDestinationAndToken(resetPassUrl).getToken());
    }

    @Test
    public void shouldGetEmailVerificationToken() {
        assertEquals(LinkHandler.Destination.VERIFY_EMAIL,
            LinkHandler.getDestinationAndToken(verifyEmailUrl).getDestination());
        assertEquals("12345678",
            LinkHandler.getDestinationAndToken(verifyEmailUrl).getToken());
    }
}

Resources:

Documentation: Link

Further reading: Android App Linking

Pull Request: feat: Add app link for password reset

Open Event Organizer App: Project repo, Play Store, F-Droid


Implementation of scanning in F-Droid build variant of Open Event Organizer Android App

Open Event Organizer App (Eventyay Organizer App) is the Android app used by event organizers to create and manage events on the Eventyay platform.

Various features include:

  1. Event creation.
  2. Ticket management.
  3. Attendee list with ticket details.
  4. Scanning of participants etc.

The Play Store build variant of the app uses Google Vision API for scanning attendees. This cannot be used in the F-Droid build variant since F-Droid requires all the libraries used in the project to be open source. Thus, we’ll be using this library: https://github.com/blikoon/QRCodeScanner 

We’ll start by creating separate ScanQRActivity, ScanQRView and activity_scan_qr.xml files for the F-Droid variant. We’ll be using a common ViewModel for the F-Droid and Play Store build variants.

Let’s start with requesting the user for camera permission so that the mobile camera can be used for scanning QR codes.

public void onCameraLoaded() {
    if (hasCameraPermission()) {
        startScan();
    } else {
        requestCameraPermission();
    }
}
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
    if (requestCode != PERM_REQ_CODE)
        return;

    // If request is cancelled, the result arrays are empty.
    if (grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        cameraPermissionGranted(true);
    } else {
        cameraPermissionGranted(false);
    }
}

@Override
public boolean hasCameraPermission() {
    return ContextCompat.checkSelfPermission(this, permission.CAMERA) == PackageManager.PERMISSION_GRANTED;
}

@Override
public void requestCameraPermission() {
    ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.CAMERA}, PERM_REQ_CODE);
}


@Override
public void showPermissionError(String error) {
    Toast.makeText(this, error, Toast.LENGTH_SHORT).show();
}

public void cameraPermissionGranted(boolean granted) {
    if (granted) {
        startScan();
    } else {
        showProgress(false);
        showPermissionError("User denied permission");
    }
}

After the camera permission is granted, or if the camera permission is already granted, then the startScan() method would be called.

@Override
public void startScan() {
    Intent i = new Intent(ScanQRActivity.this, QrCodeActivity.class);
    startActivityForResult(i, REQUEST_CODE_QR_SCAN);
}

QrCodeActivity belongs to the library that we are using.

Now, after a barcode is scanned, it needs to be processed. The processBarcode() method in ScanQRViewModel is called.

public void onActivityResult(int requestCode, int resultCode, Intent intent) {

    if (requestCode == REQUEST_CODE_QR_SCAN) {
        if (intent == null)
            return;

        scanQRViewModel.processBarcode(
            intent.getStringExtra("com.blikoon.qrcodescanner.got_qr_scan_relult"));

    } else {
        super.onActivityResult(requestCode, resultCode, intent);
    }
}

Let’s move on to the processBarcode() method, which is the same as in the Play Store variant.

public void processBarcode(String barcode) {

    Observable.fromIterable(attendees)
        .filter(attendee -> attendee.getOrder() != null)
        .filter(attendee -> (attendee.getOrder().getIdentifier() + "-" + attendee.getId()).equals(barcode))
        .compose(schedule())
        .toList()
        .subscribe(attendees -> {
            if (attendees.size() == 0) {
                message.setValue(R.string.invalid_ticket);
                tint.setValue(false);
            } else {
                checkAttendee(attendees.get(0));
            }
        });
}

The checkAttendee() method:

private void checkAttendee(Attendee attendee) {
    onScannedAttendeeLiveData.setValue(attendee);

    if (toValidate) {
        message.setValue(R.string.ticket_is_valid);
        tint.setValue(true);
        return;
    }

    boolean needsToggle = !(toCheckIn && attendee.isCheckedIn ||
        toCheckOut && !attendee.isCheckedIn);

    attendee.setChecking(true);
    showBarcodePanelLiveData.setValue(true);

    if (toCheckIn) {
        message.setValue(
            attendee.isCheckedIn ? R.string.already_checked_in : R.string.now_checked_in);
        tint.setValue(true);
        attendee.isCheckedIn = true;
    } else if (toCheckOut) {
        message.setValue(
            attendee.isCheckedIn ? R.string.now_checked_out : R.string.already_checked_out);
        tint.setValue(true);
        attendee.isCheckedIn = false;
    }

    if (needsToggle)
        compositeDisposable.add(
            attendeeRepository.scheduleToggle(attendee)
                .subscribe(() -> {
                    // Nothing to do
                }, Logger::logError));
}

This would toggle the check-in state of the attendee.

Resources:

Library used: QRCodeScanner

Pull Request: feat: Implement scanning in F-Droid build variant

Open Event Organizer App: Project repo, Play Store, F-Droid


API’s in SUSI.AI BotBuilder

In this blog, I’ll explain the different APIs involved in the SUSI.AI Bot Builder and how they work. If you are wondering how the SUSI.AI Bot Builder works or how drafts are saved, then this is the perfect blog. I’ll be explaining the different APIs, grouped by their endpoints.

API Implementation

fetchChatBots

export function fetchChatBots() {
 const url = `${API_URL}/${CMS_API_PREFIX}/getSkillList.json`;
 return ajax.get(url, {
   private: 1,
 });
}

This API is used to fetch all your saved chatbots, which are displayed on the Bot Builder page. The API endpoint is getSkillList.json. The same endpoint is used when a user creates a skill; the difference is that the query parameter private is passed, which then returns your chatbots. If you are wondering why we have the same endpoint for skills and chatbots, the plain and simple reason is that chatbots are your private skills.

fetchBotDetails

export function fetchBotDetails(payload) {
 const { model, group, language, skill } = payload;
 const url = `${API_URL}/${CMS_API_PREFIX}/getSkill.json`;
 return ajax.get(url, {
   model,
   group,
   language,
   skill,
   private: 1,
 });
}

This API is used to fetch the details of a bot or skill from the API endpoint getSkill.json. The group name, language, skill name, model and private are passed as query parameters.

fetchBotImages

export function fetchBotImages(payload) {
 const { name: skill, language, group } = payload;
 const url = `${API_URL}/${CMS_API_PREFIX}/getSkill.json`;
 return ajax.get(url, {
   group,
   language,
   skill,
   private: 1,
 });
}

This API is used to fetch skill and bot images from the API endpoint getSkill.json. Group name, language, skill name and private are passed as query parameters.

uploadBotImage

export function uploadBotImage(payload) {
 const url = `${API_URL}/${CMS_API_PREFIX}/uploadImage.json`;
 return ajax.post(url, payload, {
   headers: { 'Content-Type': 'multipart/form-data' },
   isTokenRequired: false,
 });
}

This API is used to upload the bot image to the API endpoint uploadImage.json. The Content-Type entity header is used to indicate the media type of the resource. multipart/form-data means no characters will be encoded; it is used when a form requires binary data, like the contents of a file or an image, to be uploaded.

deleteChatBot

export function deleteChatBot(payload) {
 const { group, language, skill } = payload;
 const url = `${API_URL}/${CMS_API_PREFIX}/deleteSkill.json`;
 return ajax.get(url, {
   private: 1,
   group,
   language,
   skill,
 });
}

This API is used to delete Skill and Bot from the API endpoint deleteSkill.json.

storeDraft

export function storeDraft(payload) {
 const { object } = payload;
 const url = `${API_URL}/${CMS_API_PREFIX}/storeDraft.json`;
 return ajax.get(url, { object });
}

This API is used to store a draft bot via the API endpoint storeDraft.json. The object passed as a parameter contains the properties given by the user while saving the draft, such as the skill name, group, etc.

readDraft

export function readDraft(payload) {
 const url = `${API_URL}/${CMS_API_PREFIX}/readDraft.json`;
 return ajax.get(url, { ...payload });
}

This API is used to fetch draft from the API endpoint readDraft.json. This API is called on the BotBuilder Page where all the saved drafts are shown.

deleteDraft

export function deleteDraft(payload) {
 const { id } = payload;
 const url = `${API_URL}/${CMS_API_PREFIX}/deleteDraft.json`;
 return ajax.get(url, { id });
}

This API is used to delete the saved Draft from the API endpoint deleteDraft.json. It only needs one query parameter i.e. the draft ID.

In conclusion, the above APIs are the backbone of the SUSI.AI Bot Builder. The API endpoints on the server ensure the user has the same experience across clients. Do check out the implementation of the different API endpoints on the server here.


Implementing a Chat Bubble in SUSI.AI

SUSI.AI now has a chat bubble on the bottom right of every page to assist you. The chat bubble allows you to connect with susi.ai with just a click. You can now directly play the test examples of a skill on the chatbot. It can also be viewed in full screen mode.

Redux Code

The Redux action associated with the chat bubble is handleChatBubble. The Redux state chatBubble can have 3 values:

  • minimised – the chat bubble is not visible. This state is set when the chat is viewed in full screen mode.
  • bubble – only the chat bubble is visible. This state is set when the close icon is clicked and on toggle.
  • full – the chat bubble along with the message section is visible. This state is set when the minimize icon is clicked in full screen mode and on toggle.

const defaultState = {
  chatBubble: 'bubble',
};

export default handleActions(
  {
    [actionTypes.HANDLE_CHAT_BUBBLE](state, { payload }) {
      return {
        ...state,
        chatBubble: payload.chatBubble,
      };
    },
  },
  defaultState,
);

Speech box for skill example

The user can click on the speech box for a skill example and immediately view the answer for that skill on the chatbot. When a speech bubble is clicked, a query parameter testExample is added to the URL. The value of this query parameter is resolved and answered by the Message Composer. To be able to generate an answer bubble again and again for the same query, we have a reducer state testSkillExampleKey, which is updated whenever the user clicks on the speech box and is passed as the key parameter to the message section.

Chat Bubble Code

The functions involved in the working of chatBubble code are:

  • openFullScreen – This function is called when the full screen icon is clicked in the tab and laptop view and also when chat bubble is clicked in mobile view. This opens up a full screen dialog with message section. It dispatches handleChatBubble action which sets the chatBubble reducer state as minimised.
  • closeFullScreen – This function is called when the exit full screen icon is clicked. It dispatches a handleChatBubble action which sets the chatBubble reducer state as full.
  • toggleChat –  This function is called when the user clicks on the chat bubble. It dispatches handleChatBubble action which toggles the chatBubble reducer state between full and bubble.
  • handleClose – This function is called when the user clicks on the close icon. It dispatches handleChatBubble action which sets the chatBubble reducer state to bubble.
openFullScreen = () => {
  const { actions } = this.props;
  actions.handleChatBubble({
    chatBubble: 'minimised',
  });
  actions.openModal({
    modalType: 'chatBubble',
    fullScreenChat: true,
  });
};

closeFullScreen = () => {
  const { actions } = this.props;
  actions.handleChatBubble({
    chatBubble: 'full',
  });
  actions.closeModal();
};

toggleChat = () => {
  const { actions, chatBubble } = this.props;
  actions.handleChatBubble({
    chatBubble: chatBubble === 'bubble' ? 'full' : 'bubble',
  });
};

handleClose = () => {
  const { actions } = this.props;
  actions.handleChatBubble({ chatBubble: 'bubble' });
  actions.closeModal();
};

Message Section Code (Reduced)

The message section comprises three parts: the action bar, the message list and the message composer.

Action Bar

The actionBar consists of the action buttons – search, full screen, exit full screen and close. Clicking on the search button expands and opens up a search bar. On clicking the full screen icon, the openFullScreen function is called, which opens up the chat dialog. On clicking the close icon, the handleClose function is called, which sets the chatBubble reducer state to bubble. In full screen view, clicking on the exit full screen icon calls the closeFullScreen function, which sets the reducer state chatBubble to full.

   const actionBar = (
     <ActionBar fullScreenChat={fullScreenChat}>
       {fullScreenChat !== undefined ? (
         <FullScreenExit onClick={this.closeFullScreen} width={width} />
       ) : (
         <FullScreen onClick={this.openFullScreen} width={width} />
       )}
       <Close onClick={fullScreenChat ? this.handleClose : this.toggleChat}/>
     </ActionBar>
   );

Message Section

The message section has two parts: the MessageList and the MessageComposer. The MessageList is where the messages are viewed, and the MessageComposer allows you to interact with the bot through text and speech. The scrollbar is imported from the npm library react-custom-scrollbars. When the scrollbar is moved, it sets the state of showScrollTop and showScrollBottom in the chat. messageListItems consists of all the messages between the user and the bot.


   const messageSection = (
     <MessageSectionContainer showChatBubble={showChatBubble} height={height}>
       {loadingHistory ? (
         <CircularLoader height={38} />
       ) : (
         <Fragment>
           {fullScreenChat ? null : actionBar}
           <MessageList
             ref={c => {
               this.messageList = c;
             }}
             pane={pane}
             messageBackgroundImage={messageBackgroundImage}
             showChatBubble={showChatBubble}
             height={height}>
             <Scrollbars
               renderThumbHorizontal={this.renderThumb}
               renderThumbVertical={this.renderThumb}
               ref={ref => {
                 this.scrollarea = ref;
               }}
               onScroll={this.onScroll}
               autoHide={false}>
               {messageListItems}
               {!search && loadingReply && this.getLoadingGIF()}
             </Scrollbars>
           </MessageList>
           {showScrollTop && (
             <ScrollTopFab
               size="small"
               backgroundcolor={body}
               color={theme === 'light' ? 'inherit' : 'secondary'}
               onClick={this.scrollToTop}
             >
               <NavigateUp />
             </ScrollTopFab>
           )}
           {showScrollBottom && (
             <ScrollBottomContainer>
               <ScrollBottomFab
                 size="small"
                 backgroundcolor={body}
                 color={theme === 'light' ? 'inherit' : 'secondary'}
                 onClick={this.scrollToBottom}>
                 <NavigateDown />
               </ScrollBottomFab>
             </ScrollBottomContainer>
           )}
         </Fragment>
       )}
       <MessageComposeContainer
         backgroundColor={composer}
         theme={theme}
         showChatBubble={showChatBubble}>
         <MessageComposer
           focus={!search}
           dream={dream}
           speechOutput={speechOutput}
           speechOutputAlways={speechOutputAlways}
           micColor={button}
           textarea={textarea}
           exitSearch={this.exitSearch}
           showChatBubble={showChatBubble}
         />
       </MessageComposeContainer>
     </MessageSectionContainer>
   );

    const Chat = (
      <ChatBubbleContainer className="chatbubble" height={height} width={width}>
        {chatBubble === 'full' ? messageSection : null}
        {chatBubble !== 'minimised' ? (
          <SUSILauncherContainer>
            <SUSILauncherWrapper
              onClick={width < 500 ? this.openFullScreen : this.toggleChat}>
              <SUSILauncherButton data-tip="Toggle Launcher" />
            </SUSILauncherWrapper>
          </SUSILauncherContainer>
        ) : null}
      </ChatBubbleContainer>
    );
