Adding Audio Streaming from YouTube in SUSI Linux

In this blog post, we describe how YouTube streaming works in the SUSI smart speaker and how audio is streamed directly from YouTube videos.

To achieve this, we use an open-source project called the MPV media player along with Python libraries like subprocess.

1. Processing a Query to the Server

First, the user asks the smart speaker to play YouTube audio by simply adding the word 'play' before their favorite song, e.g. 'play despacito'. The command is recognized and a query is sent to the server, which returns the following response as a JSON object.

"actions": [
     {
       "type": "answer",
       "expression": "Playing Luis Fonsi - Despacito ft. Daddy Yankee"
     },
     {
       "identifier": "kJQP7kiw5Fk",
       "identifier_type": "youtube",
       "type": "video_play"
     }]

2. Parsing the Response

The speaker then parses the response in the following way.

The speaker traverses all the actions returned in the response and checks for an "identifier" by assigning a custom class to it.

class VideoAction(BaseAction):
    def __init__(self, identifier, identifier_type):
        super().__init__()
        self.identifier = identifier
        self.identifier_type = identifier_type

Now we check whether the action is an instance of the custom VideoAction class; if it is, the client extracts the identifier from it.

elif isinstance(action, VideoAction):
    result['identifier'] = action.identifier
    audio_url = result['identifier']

3. Implementing the Actions

Now that we have identified that the response contains a video action, we can finally implement a way to play the audio for that identifier.
We use the MPV media player and the subprocess library to make it run asynchronously.

if 'identifier' in reply.keys():
    classifier = reply['identifier']
    if classifier[:3] == 'ytd':
        video_url = reply['identifier']
        video_pid = subprocess.Popen('mpv --no-video https://www.youtube.com/watch?v={} --really-quiet &'.format(video_url[4:]), shell=True)  # nosec #pylint-disable type: ignore
        self.video_pid = video_pid.pid


This is how audio is streamed from YouTube videos on the SUSI smart speaker.

Resources

  1. https://github.com/mpv-player/mpv
  2. https://docs.python.org/2/library/subprocess.html
  3. https://github.com/fossasia/susi_linux
  4. https://github.com/fossasia/susi_api_wrapper

Tags

fossasia, gsoc'18, susi, susi.ai, youtube, music, mp3, mpv, audio stream

 


Adding Offline Support to SUSI Linux

Until now, the SUSI smart speaker worked only online, like the other speakers on the market. For the first time, we have introduced a feature that allows the speaker to work offline. We deploy the server on the hardware itself and also provide the option of an online server as a fallback.

 

The offline support was implemented in the following steps.

 

Step 1: Deploying SUSI Server Locally

 

First, configure a bash script that automatically deploys the server during the initialization of the susi_linux script.

 

echo "Deploying local server"
if [ ! -e "susi_server" ]
then
    git clone https://github.com/fossasia/susi_server.git
fi

if [ -e "susi_server" ]
then
    cd susi_server
    git submodule update --recursive --remote
    git submodule update --init --recursive
    ./gradlew build
    bin/start.sh
fi

 

The above script builds the server and deploys it on 'localhost:4000'.

 

Then, add the following check to the SUSI Linux wrapper to see whether the local server is up and running. Using the local server not only adds offline support but also increases efficiency by around 30%.

def check_local_server():
    test_params = {
        'q': 'Hello',
        'timezoneOffset': int(time.timezone / 60)
    }
    try:
        chat_url = 'http://localhost:4000/susi/chat.json'
        if requests.get(chat_url, test_params):
            print('connected to local server')
            global api_endpoint
            api_endpoint = 'http://localhost:4000'
    except requests.exceptions.ConnectionError:
        print('local server is down')


check_local_server()

 

As shown above, this test checks for the local server. If the local server is down, the online server is chosen as a fallback.

 

Step 2: Adding an Offline STT Service

Now that we are able to process a query offline, we need a way to recognize the user's voice commands without using the internet. For that, we use PocketSphinx. But first, we check whether an internet connection is available.

 

def internet_on():
    try:
        urllib2.urlopen('http://216.58.192.142', timeout=1)  # nosec #pylint-disable type: ignore
        return True  # pylint-enable
    except urllib2.URLError as err:
        print(err)
        return False

 

If an internet connection is available, we use the online STT service, Google STT (the default), and switch over to PocketSphinx when the connection is not available.
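To make the fallback concrete, here is a minimal sketch of how such a switch can look when the recognizer is built on the Python SpeechRecognition library (which provides both Google STT and PocketSphinx backends); the function and variable names below are illustrative, not the exact ones used in the susi_linux repository.

import speech_recognition as sr

def recognize_audio(recognizer, audio):
    # Pick the STT engine based on connectivity (illustrative sketch).
    if internet_on():
        # Online: Google STT is the default service.
        return recognizer.recognize_google(audio)
    # Offline fallback: PocketSphinx runs entirely on the device.
    return recognizer.recognize_sphinx(audio)

# Usage sketch:
# r = sr.Recognizer()
# with sr.Microphone() as source:
#     audio = r.listen(source)
# print(recognize_audio(r, audio))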

 

Step 3: Adding the Offline TTS service

Finally, we need an offline TTS service to turn SUSI's responses into speech. We use flite TTS as our offline TTS.

 

elif payload == 'ConnectionError':
    self.notify_renderer('error', 'connection')
    config['default_tts'] = 'flite'
    os.system('play extras/connect-error.wav')

 

We check whether there is a ConnectionError; if so, we switch the TTS to flite and play an error sound.

 

Final Output:

We now have a smart speaker that works without any internet connection.

 


Tags

 

fossasia, susi, gsoc, gsoc'18, offline_tts, offline_stt, flite, pocketsphinx


Creating a Media Daemon for SUSI Smart Speaker

In operating systems, a daemon is a computer program that runs as a background process rather than under the direct control of the user. Several daemons are used in the SUSI smart speaker.

The following daemons have been created:

  • Update Daemon
  • Media Discovery Daemon
  • Factory Reset Daemon

In this blog, we'll discuss the implementation of the Media Discovery Daemon.

Media Discovery Daemon:

The SUSI smart speaker has an essential feature that allows users to play music from their USB devices. Hence, a media daemon runs in the background, detects a USB connection, scans its contents for mp3 files, and then creates a custom SUSI skill so the smart speaker can play music from the USB device.

 

The Media Daemon was implemented in the following steps.

1. UDEV Rules

We had to figure out a way to run our daemon as soon as the user inserted the USB storage and to stop it as soon as the USB storage was removed.

 

So, we used UDEV rules to trigger the Media Daemon.

 

ACTION=="add", KERNEL=="sd?", SUBSYSTEM=="block", ENV{ID_BUS}=="usb", RUN="/home/pi/SUSI.AI/susi_linux/media_daemon/autostart.sh"
ACTION=="remove", KERNEL=="sd?", SUBSYSTEM=="block", ENV{ID_BUS}=="usb", RUN="/home/pi/SUSI.AI/susi_linux/media_daemon/autostop.sh"

The udev rules trigger a script called 'autostart.sh' on USB detection and a script called 'autostop.sh' on USB removal.

2. Custom Skill Creation

Once the USB connection is detected, a script is triggered that checks for the presence of a local SUSI server in the repository. If a local server instance is detected, a Python script is triggered that walks through the USB mount point, collects the list of mp3 files on the storage device, and then creates a custom skill file in the local server instance.

 

import os
import shutil
from glob import glob
from subprocess import check_output

media_daemon_folder = os.path.dirname(os.path.abspath(__file__))
base_folder = os.path.dirname(media_daemon_folder)
server_skill_folder = os.path.join(base_folder, 'susi_server/susi_server/data/generic_skills/media_discovery')
server_settings_folder = os.path.join(base_folder, 'susi_server/susi_server/data/settings')

def make_skill():  # pylint-enable
    name_of_usb = get_mount_points()
    print(type(name_of_usb))
    print(name_of_usb[0])
    x = name_of_usb[0]
    os.chdir('{}'.format(x[1]))
    USB = name_of_usb[0]
    mp3_files = glob("*.mp3")
    f = open(media_daemon_folder + '/custom_skill.txt', 'w')
    music_path = list()
    for mp in mp3_files:
        music_path.append("{}".format(USB[1]) + "/{}".format(mp))

    song_list = " ".join(music_path)
    skills = ['play audio', '!console:Playing audio from your usb device', '{"actions":[', '{"type":"audio_play", "identifier_type":"url", "identifier":"file://' + str(song_list) + '"}', ']}', 'eol']
    for skill in skills:
        f.write(skill + '\n')
    f.close()
    shutil.move(media_daemon_folder + '/custom_skill.txt', server_skill_folder)
    f2 = open(server_settings_folder + '/customized_config.properties', 'a')
    f2.write('local.mode = true')
    f2.close()

def get_usb_devices():
    sdb_devices = map(os.path.realpath, glob('/sys/block/sd*'))
    usb_devices = (dev for dev in sdb_devices
                   if 'usb' in dev.split('/')[5])
    return dict((os.path.basename(dev), dev) for dev in usb_devices)

def get_mount_points(devices=None):
    devices = devices or get_usb_devices()  # if devices is None, detect them
    output = check_output(['mount']).splitlines()  # nosec #pylint-disable type: ignore
    output = [tmp.decode('UTF-8') for tmp in output]  # pylint-enable
    def is_usb(path):
        return any(dev in path for dev in devices)
    usb_info = (line for line in output if is_usb(line.split()[0]))
    return [(info.split()[0], info.split()[2]) for info in usb_info]

 

Now a custom skill file named `custom_skill.txt` is created in the local server instance, and the user can play audio from the USB device by speaking the command 'play audio'.
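For illustration, if the USB stick were mounted at /media/usb0 (a hypothetical mount point) and contained song1.mp3 and song2.mp3, the custom_skill.txt generated by make_skill() above would look roughly like this:

play audio
!console:Playing audio from your usb device
{"actions":[
{"type":"audio_play", "identifier_type":"url", "identifier":"file:///media/usb0/song1.mp3 /media/usb0/song2.mp3"}
]}
eol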

 

3. Preparing for the Next USB insertion

Now, if the user wants to update their music library or use another USB storage device, the USB stick will be removed; hence, the custom skill file is also deleted by the script 'autostop.sh', which is triggered via the udev rules.

#! /bin/bash

SCRIPT_PATH=$(realpath $0)
DIR_PATH=$(dirname $SCRIPT_PATH)

cd $DIR_PATH/../susi_server/susi_server/data/generic_skills/media_discovery/

sudo rm custom_skill.txt  

 

This is how the Media Discovery Daemon works in the SUSI smart speaker.

 


Tags

gsoc, gsoc'18, fossasia, susi.ai, smart speaker, media daemon, susi skills


Add Unit Test in SUSI.AI Android App

Unit testing is an integral part of software development. Hence, this blog focuses on adding unit tests to the SUSI.AI Android app. To keep things simple, take a very basic example: the anonymized feedback section. In this section, the user's email is truncated after the '@' symbol in order to maintain the user's anonymity. Here is the function that takes an email as a parameter and returns the truncated email to be displayed in the feedback section:

fun truncateEmailAtEnd(email: String?): String? {
   if (!email.isNullOrEmpty()) {
       val truncateAt = email?.indexOf('@')
       if (truncateAt is Int && truncateAt != -1) {
           return email.substring(0, truncateAt.plus(1)) + " ..."
       }
   }
   return null
}

 

The unit test has to be written for the above function.

Step – 1 : Add the following dependencies to your build.gradle file.

//unit test
testImplementation "junit:junit:4.12"
testImplementation "org.mockito:mockito-core:1.10.19"

 

Step – 2 : Add a file in the correct package (same as the file to be tested) in the test package. The function above is present in the Utils.kt file. Thus, create a file called UtilsTest.kt in the test folder, in the package org.fossasia.susi.ai.helper.

Step – 3 : Add a method called testTruncateEmailAtEnd() to UtilsTest.kt and add the '@Test' annotation before this method.

Step – 4 : Now add tests for various cases, including all possible corner cases that might occur. This can be done using assertEquals(), which takes two parameters: the expected value and the actual value.

For example, consider an email such as 'testuser@example.com' (an illustrative address). When it is passed as a parameter to the truncateEmailAtEnd() method, the expected returned string is 'testuser@ ...'. So, add a test for this case using assertEquals() as:

assertEquals("testuser@ ...", Utils.truncateEmailAtEnd("testuser@example.com"))

 

Similarly, add other cases, such as an empty email string, a null string, emails with numbers and symbols, and so on.

Here is what the UtilsTest.kt class looks like:

package org.fossasia.susi.ai.helper

import junit.framework.Assert.assertEquals
import org.junit.Test

class UtilsTest {
    @Test
    fun testTruncateEmailAtEnd() {
        // Illustrative addresses: any valid email is cut off right after the '@' symbol
        assertEquals("testuser@ ...", Utils.truncateEmailAtEnd("testuser@example.com"))
        assertEquals(null, Utils.truncateEmailAtEnd("testuser"))
        assertEquals(null, Utils.truncateEmailAtEnd(""))
        assertEquals(null, Utils.truncateEmailAtEnd(" "))
        assertEquals(null, Utils.truncateEmailAtEnd(null))
        assertEquals("test.user123@ ...", Utils.truncateEmailAtEnd("test.user123@example.com"))
        assertEquals("test_user@ ...", Utils.truncateEmailAtEnd("test_user@example.org"))
        assertEquals("test-user@ ...", Utils.truncateEmailAtEnd("test-user@mail.example.com"))
        assertEquals(null, Utils.truncateEmailAtEnd("test user"))
    }
}

 

Note: You can add more tests to check for other general and corner cases.

Step – 5 : Run the tests in UtilsTest.kt.

If all the test cases pass, you are done. If the tests fail, try to figure out the cause of the failure and add or modify the code in Utils.kt accordingly. This approach helps recognize flaws in the existing code, thereby reducing the risk of bugs and failures.



Showing skills based on different metrics in SUSI Android App using Nested RecyclerViews

The SUSI.AI Android app had an existing skill listing page, which displayed skills under different categories. As a result, a number of API calls were made at almost the same time, which slowed the app down. Thus, the UI of the skill listing page has been changed so as to reduce the number of API calls and also to make this page more useful to the user.

API Information

For getting a list of SUSI skills based on various metrics, the endpoint used is /cms/getSkillMetricsData.json

This gives you the top ten skills for each metric. Some of the metrics include skill rating, feedback count, etc. Here is a sample response for the top skills based on rating:

"rating": [
  {
    "model": "general",
    "group": "Knowledge",
    "language": "en",
    "developer_privacy_policy": null,
    "descriptions": "A skill to tell atomic mass and elements of periodic table",
    "image": "images/atomic.png",
    "author": "Chetan Kaushik",
    "author_url": "https://github.com/dynamitechetan",
    "author_email": null,
    "skill_name": "Atomic",
    "protected": false,
    "reviewed": false,
    "editable": true,
    "staffPick": false,
    "terms_of_use": null,
    "dynamic_content": true,
    "examples": ["search for atomic mass of radium"],
    "skill_rating": {
      "bookmark_count": 0,
      "stars": {
        "one_star": 0,
        "four_star": 3,
        "five_star": 8,
        "total_star": 11,
        "three_star": 0,
        "avg_star": 4.73,
        "two_star": 0
      },
      "feedback_count": 3
    },
    "usage_count": 0,
    "skill_tag": "atomic",
    "supported_languages": [{
      "name": "atomic",
      "language": "en"
    }],
    "creationTime": "2018-07-25T15:12:25Z",
    "lastAccessTime": "2018-07-30T18:50:41Z",
    "lastModifiedTime": "2018-07-25T15:12:25Z"
  },
  .
  .

]

 

Note: The above response shows only one of the ten objects. There will be ten such skill metadata objects inside the "rating" array, each containing all the details about a skill.

Implementation in SUSI.AI Android App

Skill Listing UI of SUSI Skill CMS

Skill Listing UI of SUSI Android App

The skill listing UI in the SUSI Android app displays the skills for each metric in a horizontal RecyclerView nested inside a vertical RecyclerView. Thus, to implement a horizontal RecyclerView inside a vertical RecyclerView, you need two view holders and two adapters (one for each RecyclerView). Let us go through the implementation.

  • Make a query object consisting of the model and language query parameters that shall be passed in the request to the server.

val queryObject = SkillMetricsDataQuery("general",
        PrefManager.getString(Constant.LANGUAGE, Constant.DEFAULT))

 

  • Fetch the skills based on the metrics by calling fetchSkillsMetrics() on the skill listing model, which then makes an API call to the server.

skillListingModel.fetchSkillsMetrics(queryObject, this)

 

  • When the API call is successful, the method below is called, which parses the received response and updates the adapter to display the skills for the different metrics.

override fun onSkillMetricsFetchSuccess(response: Response<ListSkillMetricsResponse>) {
   skillListingView?.visibilityProgressBar(false)
   if (response.isSuccessful && response.body() != null) {
       Timber.d("METRICS FETCHED")
       metricsData = response.body().metrics
       if (metricsData != null) {
           metrics.metricsList.clear()
           metrics.metricsGroupTitles.clear()
           if (metricsData?.rating != null) {
               if (metricsData?.rating?.size as Int > 0) {
                   metrics.metricsGroupTitles.add(utilModel.getString(R.string.metric_rating))
                   metrics.metricsList.add(metricsData?.rating)
                   skillListingView?.updateAdapter(metrics)
               }
           }

           if (metricsData?.usage != null) {
               if (metricsData?.usage?.size as Int > 0) {
                   metrics.metricsGroupTitles.add(utilModel.getString(R.string.metric_usage))
                   metrics.metricsList.add(metricsData?.usage)
                   skillListingView?.updateAdapter(metrics)
               }
           }

           if (metricsData?.newest != null) {
               val size = metricsData?.newest?.size
               if (size is Int) {
                   if (size > 0) {
                       metrics.metricsGroupTitles.add(utilModel.getString(R.string.metric_newest))
                       metrics.metricsList.add(metricsData?.newest)
                       skillListingView?.updateAdapter(metrics)
                   }
               }
           }

           if (metricsData?.latest != null) {
               if (metricsData?.latest?.size as Int > 0) {
                   metrics.metricsGroupTitles.add(utilModel.getString(R.string.metric_latest))
                   metrics.metricsList.add(metricsData?.latest)
                   skillListingView?.updateAdapter(metrics)
               }
           }

           if (metricsData?.feedback != null) {
               if (metricsData?.feedback?.size as Int > 0) {
                   metrics.metricsGroupTitles.add(utilModel.getString(R.string.metric_feedback))
                   metrics.metricsList.add(metricsData?.feedback)
                   skillListingView?.updateAdapter(metrics)
               }
           }

           if (metricsData?.topGames != null) {
                val size = metricsData?.topGames?.size
               if (size is Int) {
                   if (size > 0) {
                       metrics.metricsGroupTitles.add(utilModel.getString(R.string.metrics_top_games))
                       metrics.metricsList.add(metricsData?.topGames)
                       skillListingView?.updateAdapter(metrics)
                   }
               }
           }

           skillListingModel.fetchGroups(this)
       }
   } else {
       Timber.d("METRICS NOT FETCHED")
       skillListingView?.visibilityProgressBar(false)
       skillListingView?.displayError()
   }
}

 

  • When the skills are fetched, the data in the adapter is updated using skillMetricsAdapter.notifyDataSetChanged()

override fun updateAdapter(metrics: SkillsBasedOnMetrics) {
   swipe_refresh_layout.isRefreshing = false
   if (errorSkillFetch.visibility == View.VISIBLE) {
       errorSkillFetch.visibility = View.GONE
   }
   skillMetrics.visibility = View.VISIBLE
   this.metrics.metricsList.clear()
   this.metrics.metricsGroupTitles.clear()
    this.metrics.metricsList.addAll(metrics.metricsList)
    this.metrics.metricsGroupTitles.addAll(metrics.metricsGroupTitles)
    skillMetricsAdapter.notifyDataSetChanged()
}

 

  • The data is bound to the layout by the two adapters made earlier. The following code is from SkillMetricsAdapter, the adapter for the vertical RecyclerView; it sets the title for each metric and attaches an adapter to the horizontal RecyclerView.

override fun onBindViewHolder(holder: RecyclerView.ViewHolder, position: Int) {
   if (metrics != null) {
       if (metrics.metricsList[position] != null) {
           holder.groupName?.text = metrics.metricsGroupTitles[position]
       }

       skillAdapterSnapHelper = StartSnapHelper()
       holder.skillList?.setHasFixedSize(true)
       val mLayoutManager = LinearLayoutManager(context, LinearLayoutManager.HORIZONTAL, false)
       holder.skillList?.layoutManager = mLayoutManager
       holder.skillList?.adapter = SkillListAdapter(context, metrics.metricsList[position], skillCallback)
       holder.skillList?.onFlingListener = null
       skillAdapterSnapHelper.attachToRecyclerView(holder.skillList)
   }
}

 


Show skills image in Circular Image View in SUSI.AI Android app

Each SUSI.AI skill has data such as the skill name, skill image, skill rating, and so on. Some skill images are square while others are circular. This blog shows how to transform all images into a circular image view while setting the skill image into the appropriate view holder of the skill card using Picasso.

Step – 1 : Create a new helper class called CircleTransform.java that implements the Transformation interface from Picasso.

Step – 2 : Override the transform() and key() methods.

Step – 3 : Create a Bitmap and perform the following steps inside the transform() method, as mentioned in the code below :

@Override
public Bitmap transform(Bitmap source) {
   int size = Math.min(source.getWidth(), source.getHeight());
   int x = (source.getWidth() - size) / 2;
   int y = (source.getHeight() - size) / 2;

   Bitmap squaredBitmap = Bitmap.createBitmap(source, x, y, size, size);
   if (!squaredBitmap.equals(source)) {
       source.recycle();
   }

   Bitmap bitmap = Bitmap.createBitmap(size, size, source.getConfig());

   Canvas canvas = new Canvas(bitmap);
   Paint paint = new Paint();
   BitmapShader shader = new BitmapShader(squaredBitmap, BitmapShader.TileMode.CLAMP, BitmapShader.TileMode.CLAMP);
   paint.setShader(shader);
   paint.setAntiAlias(true);

   float radius = size / 2f;
   canvas.drawCircle(radius, radius, radius, paint);

   squaredBitmap.recycle();
   return bitmap;
}

 

This method returns a bitmap that is then used for the image in the appropriate view holder.

Step – 4 : Also return a string called “circle” from the key() method.

@Override
public String key() {
   return "circle";
}

 

Step – 5 : Now, add this transformation to the code, where the skill image is set into the appropriate view holder using Picasso.

fun setSkillsImage(skillData: SkillData, imageView: ImageView) {
   Picasso.with(imageView.context)
           .load(getImageLink(skillData))
           .error(R.drawable.ic_susi)
           .transform(CircleTransform())
           .fit()
           .centerCrop()
           .into(imageView)
}

 

Now, all skill images will be circular, as can be seen in the following screenshots:


The first image shows the skill images before applying CircleTransform, while the second shows them after applying it.



Different Text Color On Each Line In Badgeyay

In this blog post I am going to explain how to give each line of text a different color during badge generation in Badgeyay. The system now has options for different badge sizes and paper sizes, but it currently sets the same color for every line by mutating the fill parameter in the SVG. The main challenge in mutating the SVG parameter for each badge is the ID. The ID identifies the element, in our case a text element, and makes it possible to iterate over the SVG with libraries like lxml. So, to implement this feature, we first need to manipulate the SVG and assign IDs to the text tags so that they can be easily manipulated by the algorithm.

Procedure

  1. Manipulating the text tag in the SVG and assigning a proper ID according to the logic for iteration in the function.

<text
     id="Person_color_1_1"
     ...>
     Person_1_1
</text>

The ID of the person text on the first badge and first line is represented as Person_color_1_1, where the first number denotes the badge number and the second number denotes the line number.

  2. Creating a class for the dimensions of the badges
class Dimen(object):
  def __init__(self, badges, badgeSize, paperSize):
      self.badges = badges
      self.badgeSize = badgeSize
      self.paperSize = paperSize
  3. Creating an initialiser function that stores the dimension objects

badge_config = {}


def init_dimen():
    paper_sizes = ['A2', 'A3', 'A4']
    for paper in paper_sizes:
        if paper == 'A2':
            badge_config.__setitem__(paper, {'4x3': Dimen(18, '4x3', paper)})
            badge_config[paper]['4.5x4'] = Dimen(15, '4.5x4', paper)
        elif paper == 'A3':
            badge_config.__setitem__(paper, {'4x3': Dimen(8, '4x3', paper)})
            badge_config[paper]['4.5x4'] = Dimen(6, '4.5x4', paper)
        elif paper == 'A4':
            badge_config.__setitem__(paper, {'4x3': Dimen(6, '4x3', paper)})
            badge_config[paper]['4.5x4'] = Dimen(2, '4.5x4', paper)
  4. Selecting the dimension config based on the parameters passed to the function.

dimensions = badge_config[paper_size][badge_size]
  5. The looping criterion is to loop through the number of badges mentioned in the dimension config and through the number of lines, which will be five.

for idx in range(1, dimensions.badges + 1):
    for row in range(1, 6):
  6. Selecting the text element with the ID as described above.

_id = 'Person_color_{}_{}'.format(idx, row)
path = element.xpath("//*[@id='{}']".format(_id))[0]
  7. Filling the text color of the selected element by changing the value of fill.

style_detail[6] = "fill:" + str(fill[row])

That's it: now, when the loop runs, each line will have its individual color as passed to the function. The choice of colors is passed as the list named fill.
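Putting the pieces together, an end-to-end sketch could look like the following. The lxml parsing and the assumption that style_detail comes from splitting the element's style attribute (with the fill entry at index 6) are illustrative; only the fill assignment itself is taken from the code above.

from lxml import etree

def apply_line_colors(svg_path, fill, paper_size='A4', badge_size='4x3'):
    # Color each text line of every badge individually (illustrative sketch).
    init_dimen()
    dimensions = badge_config[paper_size][badge_size]
    tree = etree.parse(svg_path)
    element = tree.getroot()
    for idx in range(1, dimensions.badges + 1):
        for row in range(1, 6):
            _id = 'Person_color_{}_{}'.format(idx, row)
            path = element.xpath("//*[@id='{}']".format(_id))[0]
            style_detail = path.get('style').split(';')  # assumed style layout
            style_detail[6] = 'fill:' + str(fill[row])    # index 6 holds the fill entry
            path.set('style', ';'.join(style_detail))
    tree.write(svg_path)

# Usage sketch: one color per line, rows are indexed 1 to 5.
# apply_line_colors('badges.svg', fill=[None, '#d32f2f', '#388e3c', '#1976d2', '#fbc02d', '#7b1fa2'])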



Loading Default System Image of Event Topic on Open Event Server

In this blog, we will talk about how to add the feature of loading a default system image for an event topic on the Open Event Server. The focus is on adding a helper function that creates the system image and loads that local image onto the server.

Helper function

In this feature, we load a default system image whenever the user does not provide one; a rough sketch of the helper follows the steps below.

  1. First, we get a suitable filename for the image file using the get_file_name() function.
  2. After getting the filename, we check whether the URL provided by the user is a valid URL or not.
  3. If the URL is invalid, we use the default system image as the image of that particular event topic.
  4. We then read the local image; if the given image file or the default image is not readable or raises an IOError, we send a message to the user that the image URL is invalid.
  5. After successfully reading the image, we upload it to the event_topic directory inside the project's static directory.
  6. After uploading the image, we get a local URL showing where the image is stored. This path is stored in the database, and finally we can display the image.
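Here is a rough sketch of such a helper. The folder paths, default image location, and function name are assumptions for illustration; the actual helper in the Open Event Server codebase may differ.

import os
import shutil
import urllib.error
import urllib.request
from urllib.parse import urlparse

DEFAULT_SYSTEM_IMAGE = 'app/static/placeholders/event_topic.png'  # assumed default image path
UPLOAD_FOLDER = 'app/static/uploads/event_topic'                  # assumed upload directory

def create_system_image(image_url=None, topic_id=1):
    # Return a local URL for the event-topic image, falling back to the default image.
    filename = 'event-topic-{}.png'.format(topic_id)    # step 1: pick a suitable file name
    target = os.path.join(UPLOAD_FOLDER, filename)
    os.makedirs(UPLOAD_FOLDER, exist_ok=True)

    parsed = urlparse(image_url or '')
    if parsed.scheme in ('http', 'https'):              # step 2: is the provided URL valid?
        try:
            with urllib.request.urlopen(image_url) as resp, open(target, 'wb') as out:
                out.write(resp.read())                  # steps 4-5: read and store the image
        except (IOError, urllib.error.URLError):
            raise ValueError('Image url is invalid')    # step 4: unreadable image reported to the user
    else:
        shutil.copy(DEFAULT_SYSTEM_IMAGE, target)       # step 3: fall back to the default system image

    return '/' + target                                 # step 6: local URL stored in the database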



Adding Custom System Roles API on Open Event Server

In this blog, we will talk about how to add an API for accessing the Custom System Roles on the Open Event Server. The focus is on creating the schema and its API.

Schema Creation

For the CustomSystemRoleSchema, we make our schema as follows.
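The snippet below is a minimal illustrative sketch of such a marshmallow-jsonapi schema, based on the fields described next; the resource type and view names in Meta are assumptions and may differ from the actual Open Event Server code.

from marshmallow_jsonapi import fields
from marshmallow_jsonapi.flask import Schema


class CustomSystemRoleSchema(Schema):
    """Api schema for the Custom System Role model (illustrative sketch)."""

    class Meta:
        type_ = 'custom-system-role'                 # assumed resource type name
        self_view = 'v1.custom_system_role_detail'   # assumed view name
        self_view_kwargs = {'id': '<id>'}

    id = fields.Str(dump_only=True)
    name = fields.Str(required=True)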

Now, let’s try to understand this Schema.

In this feature, we give the admin the rights to get and create more system roles.

  1. First of all, we provide two fields in this schema: id and name.
  2. The first attribute, id, is of type string, as it holds the identity, which auto-increments when a new system role is created. Here, dump_only means that this value can't be changed after the record is created.
  3. The next attribute, name, is of string type and contains the name of the new custom system role. This attribute is required in the custom_system_roles table.

API Creation

For the Custom System Roles, we make our API as follows.
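A minimal sketch of the two resources is shown below, assuming a CustomSystemRole model, the schema above, and an is_admin permission decorator; the import paths are assumptions, and the real code scopes the admin check to specific methods via the permission manager.

from flask_rest_jsonapi import ResourceDetail, ResourceList

# Assumed imports; the actual module paths in Open Event Server may differ.
from app.api.helpers.permissions import is_admin
from app.api.schema.custom_system_roles import CustomSystemRoleSchema
from app.models import db
from app.models.custom_system_role import CustomSystemRole


class CustomSystemRoleList(ResourceList):
    """List custom system roles and let admins create new ones."""
    decorators = (is_admin,)   # assumed decorator; restricts POST to admins
    methods = ['GET', 'POST']
    schema = CustomSystemRoleSchema
    data_layer = {'session': db.session,
                  'model': CustomSystemRole}


class CustomSystemRoleDetail(ResourceDetail):
    """Get a custom system role by id; PATCH and DELETE are admin-only."""
    decorators = (is_admin,)   # assumed decorator; restricts PATCH/DELETE to admins
    methods = ['GET', 'PATCH', 'DELETE']
    schema = CustomSystemRoleSchema
    data_layer = {'session': db.session,
                  'model': CustomSystemRole}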

Now, let's try to understand this API.

In this API, we give the admin the rights to manage Custom System Roles.

  1. CustomSystemRoleList inherits ResourceList, which gives us the list of all custom system roles in the whole system.
  2. CustomSystemRoleList has a decorators attribute, which restricts the POST request to admins of the system.
  3. CustomSystemRoleDetail inherits ResourceDetail, which gives the details of a CustomSystemRole object by id.
  4. CustomSystemRoleDetail has a decorators attribute, which restricts the PATCH and DELETE requests to admins of the system.

So, we saw how the Custom System Role schema and API are created to allow users to get its values and admin users to update and delete its records.



Adding Panel Permissions API in Open Event Server

In this blog, we will talk about how to add an API for accessing the Panel Permissions on the Open Event Server. The focus is on creating the schema and its API.

Schema Creation

For the PanelPermissionSchema, we make our schema as follows.
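The snippet below is a minimal illustrative sketch of such a schema, based on the fields described next; the resource type and view names are assumptions and may differ from the actual Open Event Server code.

from marshmallow_jsonapi import fields
from marshmallow_jsonapi.flask import Relationship, Schema


class PanelPermissionSchema(Schema):
    """Api schema for the Panel Permission model (illustrative sketch)."""

    class Meta:
        type_ = 'panel-permission'                 # assumed resource type name
        self_view = 'v1.panel_permission_detail'   # assumed view name
        self_view_kwargs = {'id': '<id>'}

    id = fields.Str(dump_only=True)
    panel_name = fields.Str(required=True, allow_none=False)
    role_id = fields.Integer()
    can_access = fields.Boolean()
    role = Relationship(related_view='v1.custom_system_role_detail',  # assumed view name
                        related_view_kwargs={'id': '<role_id>'},
                        schema='CustomSystemRoleSchema',
                        type_='custom-system-role')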

Now, let’s try to understand this Schema.

In this feature, we give the admin the rights to create and assign panel permissions to any of the custom system roles.

  1. First of all, we provide four fields in this schema: id, panel_name, role_id, and can_access.
  2. The first attribute, id, is of type string, as it holds the identity, which auto-increments when a new record is created. Here, dump_only means that this value can't be changed after the record is created.
  3. The next attribute, panel_name, is of string type and contains the name of the panel. This attribute is required in the panel_permissions table, so it is set with allow_none=False.
  4. The next attribute, role_id, is of integer type and tells us which role the current panel entry belongs to.
  5. The next attribute, can_access, is of boolean type and tells us whether the role with id=role_id has access to this panel or not.
  6. There is also a relationship named role, which gives us the details of the custom system role with id=role_id.

API Creation

For the Panel Permissions, we make our API as follows.
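A minimal sketch of the two resources is shown below, assuming a PanelPermission model, the schema above, and an is_admin permission decorator; the import paths are assumptions, and the real code applies the admin check per method via the permission manager.

from flask_rest_jsonapi import ResourceDetail, ResourceList

# Assumed imports; the actual module paths in Open Event Server may differ.
from app.api.helpers.permissions import is_admin
from app.api.schema.panel_permissions import PanelPermissionSchema
from app.models import db
from app.models.panel_permission import PanelPermission


class PanelPermissionList(ResourceList):
    """List and create panel permissions; both GET and POST are admin-only."""
    decorators = (is_admin,)   # assumed decorator
    methods = ['GET', 'POST']
    schema = PanelPermissionSchema
    data_layer = {'session': db.session,
                  'model': PanelPermission}


class PanelPermissionDetail(ResourceDetail):
    """Get, update, or delete a panel permission by id; admin-only."""
    decorators = (is_admin,)   # assumed decorator
    methods = ['GET', 'PATCH', 'DELETE']
    schema = PanelPermissionSchema
    data_layer = {'session': db.session,
                  'model': PanelPermission}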

Now, let's try to understand this API.

In this API, we give the admin the rights to set panel permissions for a custom system role.

  1. PanelPermissionList inherits ResourceList, which gives us the list of all panel permissions in the whole system.
  2. PanelPermissionList has a decorators attribute, which restricts both GET and POST requests to admins of the system.
  3. The POST request of the PanelPermissionList API requires the role relationship.
  4. PanelPermissionDetail inherits ResourceDetail, which gives the details of a Panel Permission object by id.
  5. PanelPermissionDetail has a decorators attribute, which restricts GET, PATCH, and DELETE requests to admins of the system.

So, we saw how the Panel Permission schema and API are created to allow admin users to get, update, and delete its records.


 
