Addition Of Track Tags In Open Event Android App

In the Open Event Android app we only had a list of sessions on the schedule page, without knowing which track each session belonged to. There were several iterations of the UI design for this. Taking cues from the Google I/O 17 app, we decided that adding track tags would be informative for the user. In this blog post I will describe how this feature was implemented.

TrackTag Layout in Session

<Button
    android:layout_width="wrap_content"
    android:layout_height="@dimen/track_tag_height"
    android:id="@+id/slot_track"
    android:textAllCaps="false"
    android:gravity="center"
    android:layout_marginTop="@dimen/padding_small"
    android:paddingLeft="@dimen/padding_small"
    android:paddingRight="@dimen/padding_small"
    android:textStyle="bold"
    android:layout_below="@+id/slot_location"
    android:ellipsize="marquee"
    android:maxLines="1"
    android:background="@drawable/button_ripple"
    android:textColor="@color/white"
    android:textSize="@dimen/text_size_small"
    tools:text="Track" />

The important part here is that the track tag was modelled as a <Button/> to give users touch feedback via a ripple effect.
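The ripple itself comes from the button_ripple background drawable. As a rough sketch (assuming API 21+; the actual drawable in the app may differ), such a drawable could look like this:

<!-- res/drawable/button_ripple.xml: a hypothetical sketch; the app's actual drawable may differ -->
<ripple xmlns:android="http://schemas.android.com/apk/res/android"
    android:color="@android:color/darker_gray">
    <item>
        <shape android:shape="rectangle">
            <!-- base fill; tinted with the track color at runtime, as shown below -->
            <solid android:color="@color/white" />
        </shape>
    </item>
</ripple>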

Tag Implementation in DayScheduleAdapter

All the dynamic colors for the track tags were handled separately in the DayScheduleAdapter.

final Track sessionTrack = currentSession.getTrack();
int storedColor = Color.parseColor(sessionTrack.getColor());
holder.slotTrack.getBackground().setColorFilter(storedColor, PorterDuff.Mode.SRC_ATOP);

holder.slotTrack.setText(sessionTrack.getName());
holder.slotTrack.setOnClickListener(v -> {
    Intent intent = new Intent(context, TrackSessionsActivity.class);
    intent.putExtra(ConstantStrings.TRACK, sessionTrack.getName());
    intent.putExtra(ConstantStrings.TRACK_ID, sessionTrack.getId());
    context.startActivity(intent);
});

As the colors associated with a track are all stored inside the track model, we first needed to obtain the track from currentSession. We could then get the stored color by calling getColor() on the track object associated with the session.

In order to set this color as the background of the button, we needed to set a color filter with the PorterDuff mode SRC_ATOP.

The name of the parent class is an homage to the work of Thomas Porter and Tom Duff, presented in their seminal 1984 paper titled “Compositing Digital Images”. In this paper, the authors describe 12 compositing operators that govern how to compute the color resulting of the composition of a source (the graphics object to render) with a destination (the content of the render target).

Ref : https://developer.android.com/reference/android/graphics/PorterDuff.Mode.html

To understand exactly what the SRC_ATOP mode does, we can refer to the image below. There are several other modes with similar functionality.

               

[Image: the SRC image, the DEST image, and the result of SRC ATOP DEST]

We also set an onClickListener on the track tag so that it directs the user to the corresponding track's sessions page.

So now we have successfully added track tags to the app.


Addition Of Settings Page Animations in Open Event Android App

In order to bring a uniform user flow to Open Event Android, slide animations needed to be implemented for various screens. It turned out that the settings page didn't adhere to this rule. In this blog post I'll highlight how this was implemented in the app.

Android Animations

Animations for views/layouts in Android are built to make elements of the app follow real-life motion and physics. They are mainly used to choreograph motion among various elements within a page or across multiple pages. Here we implement a simple slide_in and a slide_out animation for the settings page. Below I show how to write a simple animation.

slide_in_right.xml

<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android">
    <translate
        android:duration="@android:integer/config_shortAnimTime"
        android:fromXDelta="100%"
        android:toXDelta="0%" />
</set>

This XML file is mainly what does the animation for us in the app. We just need to provide the translate element with the starting offset fromXDelta, from which the view moves to the final offset toXDelta.

@android:integer/config_shortAnimTime is a built-in integer resource for short animation durations.

Here toXDelta="0%" signifies the final position of the settings page, i.e. the screen itself, and fromXDelta="100%" signifies the virtual screen space to the right of the actual screen (outside the screen of the phone).

Using this animation it appears that the screen is sliding into the view of the user from the right.

We will be using overridePendingTransition to override the default enter and exit animations for activities, which are fade_in and fade_out as of Android Lollipop.

The two integers you provide for overridePendingTransition(int enterAnim, int exitAnim) correspond to the two animations – removing the old Activity and adding the new one.

We have already defined the enterAnim here: slide_in_right. For the exit animation we define another animation, stay_in_place.xml, which has both fromXDelta and toXDelta set to 0%, as sketched below. This is done just to avoid any visible exit animation.
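Based on that description, stay_in_place.xml would look something like this (a minimal sketch):

<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android">
    <!-- Both deltas are 0%, so the view does not move during the transition -->
    <translate
        android:duration="@android:integer/config_shortAnimTime"
        android:fromXDelta="0%"
        android:toXDelta="0%" />
</set>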

We need to add this line after the super.onCreate() call within the SettingActivity:

@Override
protected void onCreate(Bundle savedInstanceState) {
    // Some code
    super.onCreate(savedInstanceState);
    overridePendingTransition(R.anim.slide_in_right, R.anim.stay_in_place);
    // Some code
}
Now we are done: the settings page animates with the prescribed animations.


Global Search in Open Event Android

In the Open Event Android app we only had a single data source for searching on each page: the content of the page itself. But it turned out that users want to search data across an event, and therefore across different screens in the app. Global search solves this problem. We have recently implemented global search in Open Event Android, which enables the user to search data from the different pages, i.e. tracks, speakers, locations etc., all on a single page. This helps the user obtain their desired result in less time. In this blog I describe how we implemented the feature in the app using Java and XML.

Implementing the Search

The first step is to add the search icon on the home screen. We have done this with the id R.id.action_search_home.

@Override
public void onCreateOptionsMenu(Menu menu, MenuInflater inflater) {
    super.onCreateOptionsMenu(menu, inflater);
    inflater.inflate(R.menu.menu_home, menu);
    // Get the SearchView and set the searchable configuration
    SearchManager searchManager = (SearchManager) getContext().getSystemService(Context.SEARCH_SERVICE);
    searchView = (SearchView) menu.findItem(R.id.action_search_home).getActionView();
    // Assumes current activity is the searchable activity
    searchView.setSearchableInfo(searchManager.getSearchableInfo(getActivity().getComponentName()));
    searchView.setIconifiedByDefault(true);
}

What happens here is that the search icon on the top right of the home screen is designated as a searchable component, which is responsible for setting up the search widget on the Toolbar of the app.

@Override
public boolean onCreateOptionsMenu(Menu menu) {
    MenuInflater inflater = getMenuInflater();
    inflater.inflate(R.menu.menu_home, menu);
    SearchManager searchManager = (SearchManager) getSystemService(Context.SEARCH_SERVICE);
    searchView = (SearchView) menu.findItem(R.id.action_search_home).getActionView();
    searchView.setSearchableInfo(searchManager.getSearchableInfo(getComponentName()));

    searchView.setOnQueryTextListener(this);
    if (searchText != null) {
        searchView.setQuery(searchText, true);
    }
    return true;
}

We can see that a queryTextListener has been set up in this function, which triggers a callback whenever the query in the SearchView changes.

Example of a Searchable Component

<?xml version="1.0" encoding="utf-8"?>
 <searchable xmlns:android="http://schemas.android.com/apk/res/android"
    android:hint="@string/global_search_hint"
    android:label="@string/app_name" />

For more info: https://developer.android.com/guide/topics/search/searchable-config.html

When this searchable component is referenced in the manifest inside the destination activity's body, that activity is set as the search destination; an intent filter must also be declared on the activity to mark it as searchable.

Manifest Code for SearchActivity

<activity
        android:name=".activities.SearchActivity"
        android:launchMode="singleTop"
        android:label="Search App"
        android:parentActivityName=".activities.MainActivity">
    <intent-filter>
        <action android:name="android.intent.action.SEARCH" />
    </intent-filter>
    <meta-data
        android:name="android.app.searchable"
        android:resource="@xml/searchable" />
 </activity>

The attribute android:launchMode="singleTop" is very important: without it, every search would push a new instance of SearchActivity onto the call stack, which is not needed and would also eat up a lot of memory.

Handling the Intent to the SearchActivity

We need to do a standard check to determine whether the intent is of type ACTION_SEARCH:

if (Intent.ACTION_SEARCH.equals(getIntent().getAction())) {
    handleIntent(getIntent());
}

@Override
protected void onNewIntent(Intent intent) {
    super.onNewIntent(intent);
    handleIntent(intent);
}

public void handleIntent(Intent intent) {
    final String query = intent.getStringExtra(SearchManager.QUERY);
    searchQuery(query);
}

The function searchQuery is called within handleIntent in order to search for the text that we received from the home screen.

SearchView Trigger Functions

Next we need to add two main functions in order to get the search working:

  • onQueryTextChange
  • onQueryTextSubmit

The function names are self-explanatory. Now we will move on to the code implementation of the given functions.

@Override
public boolean onQueryTextChange(String query) {
    if (query != null) {
        searchQuery(query);
        searchText = query;
    } else {
        results.clear();
        handleVisibility();
    }
    return true;
}

@Override
public boolean onQueryTextSubmit(String query) {
    searchView.clearFocus();
    return true;
}

The role of searchView.clearFocus() in the above code snippet is to remove the keyboard popup from the screen, so the user has a clear view of the search results.

The main search logic is handled by the function searchQuery, which I'll talk about now.

Search Logic

private void searchQuery(String constraint) {

    results.clear();

    if (!TextUtils.isEmpty(constraint)) {
        String query = constraint.toLowerCase(Locale.getDefault());
        addResultsFromTracks(query);
        addResultsFromSpeakers(query);
        addResultsFromLocations(query);
    }
}

// These are some variables for reference:
// the custom RecyclerView adapter that has been defined for the search
private GlobalSearchAdapter globalSearchAdapter;
// this stores the results in an Object list
private List<Object> results;

We are assuming that we have POJOs (Plain Old Java Objects) for Tracks, Speakers and Locations, and for the result type header.

The code posted below fetches the required results from the list of tracks. All the results are fetched asynchronously from Realm, and we have also attached a header for the result type to denote whether a result is of type Track, Speaker or Location. We also add a change listener to notify us when the set of results changes.

Similarly, this is done for all the result types that we need, i.e. Tracks, Locations and Speakers.

public void addResultsFromTracks(String queryString) {

    RealmResults<Track> filteredTracks = realm.where(Track.class)
            .like("name", queryString, Case.INSENSITIVE)
            .findAllSortedAsync("name");

    filteredTracks.addChangeListener(tracks -> {
        if (tracks.size() > 0) {
            results.add("Tracks");
        }
        results.addAll(tracks);
        globalSearchAdapter.notifyDataSetChanged();
        Timber.d("Filtering done total results %d", tracks.size());
        handleVisibility();
    });
}
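Since results holds plain String headers alongside Track, Speaker and Location objects, the adapter needs to tell them apart when creating view holders. A minimal sketch of how this could look inside GlobalSearchAdapter (hypothetical; the actual adapter in the app may be structured differently):

// Hypothetical sketch of the view-type logic; the real GlobalSearchAdapter may differ.
private static final int TYPE_HEADER = 0;
private static final int TYPE_TRACK = 1;
private static final int TYPE_SPEAKER = 2;
private static final int TYPE_LOCATION = 3;

@Override
public int getItemViewType(int position) {
    Object item = results.get(position);
    if (item instanceof String) {
        // "Tracks", "Speakers" or "Locations" header rows
        return TYPE_HEADER;
    } else if (item instanceof Track) {
        return TYPE_TRACK;
    } else if (item instanceof Speaker) {
        return TYPE_SPEAKER;
    } else {
        return TYPE_LOCATION;
    }
}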

We now have a global search feature in the Open Event Android app. Users had asked for this feature, and a learning for us is that it would have been even better to do more tests with users when we developed the first versions. That way we could have included this feedback and implemented global search earlier on.


Integrating an Image Editing Page in Phimpme Android

The main aim of Phimpme is to develop an image editing and sharing application as an alternative to proprietary solutions like Instagram. Any user can choose a photo from the gallery or take a picture with the camera and upload it to various social media platforms, including Drupal and WordPress. After going through most of the image editor applications currently in the app store, my team and I discussed and listed down the basic functionality of the image editing activity. We have listed the following features:

  • Filters
  • Stickers
  • Image tuning

Choosing the Image Editing Application

There are a number of existing open source projects that we went through to check how they could be integrated into Phimpme. We looked into those projects which are licensed under the MIT Licence. As per the MIT Licence, the user has the liberty to use the code, modify it, merge it, and publish it without any restrictions. Image-Editor Android is one such application with an MIT Licence. The Image-Editor Android app has extensive features for manipulating and enhancing images. The features are as follows:

  • Edit Image by drawing on it.
  • Applying stickers on the image.
  • Applying filters.
  • Crop.
  • Rotating the image.
  • Text on the image.

It is an ideal application to be implemented in our project.

The basic flow of the application

First, the image is obtained either from the gallery or the camera; the team has implemented leafPic and openCamera for this. Second, the image is redirected from the leafPic gallery to the image editing activity by choosing the edit option from the popup menu.

Populating the popup menu in XML:

The <menu> tag is the root node, which contains the items in the popup menu. The following code is used to populate the menu:

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:id="@+id/action_edit"
          android:icon="@drawable/ic_edit"
          android:title="@string/Edit"
          android:showAsAction="ifRoom"/>
    <item android:id="@+id/action_use_as"
          android:icon="@drawable/ic_use_as"
          android:title="@string/useAs" />
</menu>

Setting up the Image Editing Activity

The Image-Editor Android application contains two main sections:

  • MainActivity (to get the image)
  • imageeditlibrary (to edit the image)

We need to import the imageeditlibrary module. Android Studio gives an easy method to import a module from any other project using the GUI: File -> New -> Import Module, then choose the module from the desired application.

Things to remember after importing any module from any other project:

  • Making sure that the minSdkVersion and targetSdkVersion in the Gradle files of the imported module and the current working project are the same (a sketch follows this list). In Phimpme the minSdkVersion is 16 and the targetSdkVersion is 25, which are used as the standard SDK versions.
  • Importing all the classes used in the imageeditlibrary module before using them in the leafPic gallery.
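As a sketch, the relevant part of both build.gradle files (the app module and the imported imageeditlibrary module) would then agree on these values:

android {
    // Both modules must agree on these values (Phimpme uses 16 and 25);
    // other settings omitted in this sketch
    defaultConfig {
        minSdkVersion 16
        targetSdkVersion 25
    }
}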

Sending Image to Image Editing Activity

This includes three tasks:

  • Handling onClick listeners.
  • Sending the image from the leafPic activity.
  • Receiving the image in EditImageActivity.

Handling onClick Listener:

@Override
public boolean onOptionsItemSelected(MenuItem item) {
    switch (item.getItemId()) {
        case R.id.action_edit:
            // handle the edit action here (launch the editing activity)
            return true;
        default:
            return super.onOptionsItemSelected(item);
    }
}

Sending Image to EditImageActivity:

First we need to get the path of the image to be sent. For this we use the FileUtils class to handle the source address of the image. In FileUtils we use the function genEditFile():

public static File genEditFile() {
    return FileUtils.getEmptyFile("croppedImage"
            + System.currentTimeMillis() + ".png");
}

This calls the function getEmptyFile(String name):

public static File getEmptyFile(String name) {
    File folder = FileUtils.createFolders();
    if (folder != null) {
        if (folder.exists()) {
            File file = new File(folder, name);
            return file;
        }
    }
    return null;
}

After getting the path of the file we need to send the path of the file to the EditImageActivity:

Uri uri = Uri.fromFile(new File(getAlbum().getCurrentMedia().getPath()));
File outputFile = FileUtils.genEditFile();
EditImageActivity.start(this, String.valueOf(uri), outputFile.getAbsolutePath(), ACTION_REQUEST_EDITIMAGE);

Receiving the image in EditImageActivity:

This is done by calling the getData() function from onCreate:

private void getData() {
    filePath = getIntent().getStringExtra(FILE_PATH);
    saveFilePath = getIntent().getStringExtra(EXTRA_OUTPUT);
    loadImage(filePath);
}


Conclusion

When integrating modules from another project we have to keep the SDK versions of both projects the same. The best way to send an image to another activity is to save the image internally and then load the image path from the other activity.

Twitter OAuth


What is OAuth?

OAuth is an open protocol which enables secure authorization in a simple and standard way for web, mobile and desktop applications. Facebook, Google, Twitter, GitHub and many other web services use this protocol to authenticate users. Using OAuth is very convenient, because it delegates user authentication to the service which hosts the user account. It allows us to get resources from another web service without handing over any login or password. If you have a service and want to offer authentication via Twitter, the best solution is to use OAuth.

Recently the Open Event team met a problem on the user profile page: we'd like to automatically fill in information about the user. To solve it, we use the OAuth protocol to authenticate with Twitter. After a three-step authentication we can get the user's name and profile picture. If you need other information from the Twitter profile, like recent tweets or a followers' list, visit the Twitter API site to see more examples of resources you can get.

How does the Open Event team implement communication between Orga-server and Twitter?

All services have a very similar flow. Below I will show you how it looks in our case.

Before starting, you need to create your own Twitter app on the Twitter apps site https://apps.twitter.com/. Once you create an app you will see a CONSUMER KEY and a CONSUMER SECRET KEY. These shouldn't be publicly readable, so remember not to share them.

The example below shows how to get basic information about a Twitter profile.

We use the oauth2 Python library: https://github.com/joestump/python-oauth2

consumer = oauth2.Consumer(key=TwitterOAuth.get_client_id(),
                           secret=TwitterOAuth.get_client_secret())
client = oauth2.Client(consumer)

TwitterOAuth.get_client_id() – CONSUMER KEY
TwitterOAuth.get_client_secret() – CONSUMER SECRET KEY

Then we send a GET request to the request_token endpoint to get an oauth_token:

client.request('https://api.twitter.com/oauth/request_token', "GET")

Response:

oauth_token=Z6eEdO8MOmk394WozF5oKyuAv855l4Mlqo7hhlSLik&
oauth_token_secret=Kd75W4OQfb2oJTV0vzGzeXftVAwgMnEK9MumzYcM&
oauth_callback_confirmed=true

The next step is to redirect the user to the Twitter authentication site.


You can see a redirect_uri in the URL. After signing in, the client will get a callback from Twitter with the oauth_verifier and oauth_token params:

https://api.twitter.com/oauth/authenticate?oauth_token=RcYGqAAAAAAAwdbbAAABVoM1UMo&oauth_token_secret=wZfpPpCugAmdF3AuohEnvBTRxdCmllxu&oauth_callback_confirmed=true&redirect_uri=http://open-event-dev.herokuapp.com/tCallback
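Building that redirect URL from the request token takes only a few lines; a minimal sketch (the function and constant names are hypothetical, and the actual Orga-server code may differ):

# Hypothetical sketch; the actual Orga-server code may differ.
import urllib.parse  # Python 3 (Python 2 code would use urllib instead)

TW_AUTHENTICATE_URI = 'https://api.twitter.com/oauth/authenticate'

def get_authentication_url(oauth_token):
    # The user's browser is redirected to this URL to sign in with Twitter
    return TW_AUTHENTICATE_URI + '?' + urllib.parse.urlencode(
        {'oauth_token': oauth_token})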

The last step is to get an access token. If we have an oauth_verifier and an oauth_token, it's pretty easy:

def get_access_token(self, oauth_verifier, oauth_token):
    consumer = self.get_consumer()
    client = oauth2.Client(consumer)
    return client.request(
        self.TW_ACCESS_TOKEN_URI + '?oauth_verifier=' + oauth_verifier +
        "&oauth_token=" + oauth_token, "POST")

Where TW_ACCESS_TOKEN_URI is https://api.twitter.com/oauth/access_token
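The response body of this POST is again a URL-encoded string, so before indexing into it as a dictionary (as the next snippet does with access_token) it has to be parsed; a minimal sketch, assuming Python 3's urllib.parse:

# A sketch of parsing the access-token response; the actual code may differ.
import urllib.parse

resp, content = self.get_access_token(oauth_verifier, oauth_token)
if isinstance(content, bytes):
    content = content.decode('utf-8')
access_token = dict(urllib.parse.parse_qsl(content))
# access_token now holds keys such as 'oauth_token',
# 'oauth_token_secret', 'screen_name' and 'user_id'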

The final step is to get the user information:

resp, content = client.request("https://api.twitter.com/1.1/users/show.json?"
                               "screen_name=" + access_token["screen_name"] +
                               "&user_id=" + access_token["user_id"], "GET")

user_info = json.loads(content)

From the user_info variable you can get the profile picture or the profile name.

Summarizing, the OAuth protocol is very secure and easy for developers to use. At the beginning the OAuth flow can seem a little hard to understand, but if you spend some time trying to understand it, everything becomes easier. And it's secure, because you don't need to store a login or a password, and an access token has an expiry time. This is the main feature of the OAuth protocol.

Unit Testing

There are many stories about unit testing. Developers sometimes say that they don't write tests because they write good quality code. Does that make sense, if no one is infallible?

During studies only a few teachers talk about unit testing, and they only show basic examples. They require a few tests to be written to finish a final project, but nobody really teaches us the importance of unit testing.

I have also always wondered what benefits it can bring. As time is a really important factor in our work, it often happens that we simply drop this part of the development process to get "more time", rather than spending it on writing "stupid" tests. But now I know that it is a vicious circle.

Customer requirements do not help us either. They put high pressure on us to show visible results, not a few statistics about coverage status. None of them cares about some strange numbers. So, as I mentioned above, we usually focus on building new features and get rid of tests. It may seem to save time, but it doesn't.

In reality, tests save us a lot of time because we can identify and fix bugs very quickly. If a bug occurs because of someone's change, we don't have to spend long hours trying to figure out what is going on. That's why we need tests.

This is especially visible in huge open source projects. The FOSSASIA organization has about 200 contributors. In the Open Event project we have about 20 active developers, who generate many lines of code every single day. These lines change over and over again and interfere with each other.

Let me provide you with a simple example. In our team we have about 7 pull requests per day. As I mentioned above, we want to keep our code high quality and free of bugs, but without tests, identifying whether a pull request causes a bug is a very difficult task. Fortunately, Travis CI does this boring job for us. It is a great tool which runs our tests on every PR to check if bugs occur. It helps us to notice bugs quickly and maintain our project very well.

What is unit testing?

Unit testing is a software development method in which the smallest testable parts of an application are tested.

Why do we need writing unit tests?

Let me point out the arguments why unit testing is really important while developing a project.

  • To prove that our code works properly

If a developer adds another condition, the tests check whether the method returns correct results. You simply don't need to wonder if something is wrong with your code.

  • To reduce amount of bugs

They let you know what input params a function should get and what results should be returned. You simply don't write unused code.

  • To save development time

Developers don't waste time checking, after every change, whether the code still works correctly.

  • Unit tests help to understand software design
  • To provide quick feedback about method which you are testing
  • To help document a code

How to write unit test in Python

In my work I write tests in Python. I am going to share some sample code with you now.

  • Import the unittest module
  • Choose a function to test
  • Write the unit test (a minimal sketch follows)
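As a minimal, generic sketch of these three steps (the add function and TestAdd class are hypothetical, not from the Open Event codebase):

import unittest

def add(a, b):
    # The function under test
    return a + b

class TestAdd(unittest.TestCase):

    def test_add_returns_sum(self):
        self.assertEqual(add(2, 3), 5)

if __name__ == '__main__':
    unittest.main()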

Example OpenEvent test in Python

class TestPagesUrls(OpenEventTestCase):

    def setUp(self):
        self.app = Setup.create_app()

    def test_if_urls_exist(self):
        """Test all urls via GET method"""
        with app.test_request_context():
            for rule in app.url_map.iter_rules():
                if excluded_paths(rule):
                    status_code = self.app.get(request.url[:-1] + str(rule).replace('//', '/'),
                                               follow_redirects=True).status_code
                    self.assertTrue(status_code in [200, 302, 401])

I wanted to check that all views exist, but doing that view by view would require a lot of time. That's why I wondered how to avoid writing many similar tests. Finally, based on our list of routes, I was able to write a single test which checks the status code of every page.

If any response returns a status code different from 200, 302 or 401, the test fails. This result means that something is wrong. Simple, isn't it? Try to test it manually… This one short test covers about 40 use cases.

This example shows the incredible value of unit tests! If a developer introduces a bug, they receive an error that something is wrong with a view. Travis CI allows us to reject all broken pull requests and merge only those which fulfill our quality requirements.

Fixing an error is one part, but finding the bug is an even harder task. The ability to detect bugs at an early stage of development reduces the cost of software.

 

Participate in FOSSASIA Summit 2016 in Science Center Singapore, March 18th-20th

Please join us at FOSSASIA 2016 in Singapore, the premier Open Technology event in Asia.

The event will take place from March 18-20 at the Singapore Science Center, and already on March 17th the pgDay Asia conference is part of the pre-event activities.

The FOSSASIA weekend from Friday to Sunday is dedicated to the "Internet of Things and Me", covering open technologies and software that make today's connected devices run. In workshops, kids can start learning with the Pocket Science Lab. In the Science Hack track, attendees will learn how to participate in the Citizen Science community.

More than 120 speakers from Asia and around the world will join the event from communities and companies such as Google, Red Hat, and GitHub. There will be talks and hands-on workshops on topics including:

  • Open Hardware, Makers, Internet of Things
  • Open Source Software, Data and Free Knowledge
  • DevOps, Docker, Programming languages, Python, Go, and more
  • Science Hacks and Open Design
  • Tech and Science for Kids

Info on the FOSSASIA Summit 2016 at the Event Website

Read the Call for Speakers here.

Join the FOSSASIA Meetup Group in Singapore and reserve your spot in workshops as soon as they are announced.

Follow us on Twitter.

Check out the photos from last year on Flickr.