I2C Communication in PSLab

PSLab supports communication over the I2C protocol, and both the Desktop App and the Android App have the framework set up to use it. I2C is mainly used by sensors that can be connected to the PSLab. To support I2C communication, the PSLab board has a dedicated I2C block with pins labelled 3.3V, GND, SCL and SDA. A brief overview of how I2C communication works and its advantages and limitations compared to SPI communication can be found here.

The PSLab Python and Java communication libraries each have a class dedicated to I2C communication, with numerous methods defined in it. The methods required for a particular I2C sensor may differ, but in general most sensors rely on a common set of methods. The commonly used methods are listed below along with their functions. To invoke a method, the I2C bus is first notified using the HEADER byte (common to all methods), followed by a byte that uniquely identifies the method in use.

The send method is used to send data over the I2C bus. The bus is first initialised and pointed at the correct slave address using I2C.start(address), after which this method is called with the data byte to be sent as its argument.

def send(self, data):
    try:
        self.H.__sendByte__(CP.I2C_HEADER)
        self.H.__sendByte__(CP.I2C_SEND)
        self.H.__sendByte__(data)  # data byte
        return self.H.__get_ack__() >> 4
    except Exception as ex:
        self.raiseException(ex, "Communication Error , Function : " + inspect.currentframe().f_code.co_name)
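
Taken together with the start and stop methods defined in the same class, a basic write transaction built from these primitives might look like the sketch below. It assumes an already-initialised I2C object i2c; the slave address 0x68 and the byte 0x3B are purely illustrative.

# Illustrative write transaction; 0x68 is a hypothetical slave address.
i2c.start(0x68, 0)   # address the slave in write mode
i2c.send(0x3B)       # send a single data/register byte
i2c.stop()           # release the bus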

 

The read method reads a fixed number of bytes from the I2C slave. One can also use I2C.simpleRead(address, numbytes) instead to read from the I2C slave. This method takes the number of bytes to be read as its argument: it fetches length - 1 bytes with an acknowledge bit after each, and then reads the final byte to end the transaction.

def read(self, length):
    data = []
    try:
        for a in range(length - 1):
            self.H.__sendByte__(CP.I2C_HEADER)
            self.H.__sendByte__(CP.I2C_READ_MORE)
            data.append(self.H.__getByte__())
            self.H.__get_ack__()
        self.H.__sendByte__(CP.I2C_HEADER)
        self.H.__sendByte__(CP.I2C_READ_END)
        data.append(self.H.__getByte__())
        self.H.__get_ack__()
    except Exception as ex:
        self.raiseException(ex, "Communication Error , Function : " + inspect.currentframe().f_code.co_name)
    return data

 

The readBulk method reads data from the I2C slave. It takes the I2C slave device address, the register address from which data is to be read and the number of bytes to read as arguments, and returns the bytes read as a list.

def readBulk(self, device_address, register_address, bytes_to_read):
        try:
            self.H.__sendByte__(CP.I2C_HEADER)
            self.H.__sendByte__(CP.I2C_READ_BULK)
            self.H.__sendByte__(device_address)
            self.H.__sendByte__(register_address)
            self.H.__sendByte__(bytes_to_read)
            data = self.H.fd.read(bytes_to_read)
            self.H.__get_ack__()
            try:
                return [ord(a) for a in data]
            except:
                print('Transaction failed')
                return False
        except Exception as ex:
            self.raiseException(ex, "Communication Error , Function : " + inspect.currentframe().f_code.co_name)

 

The writeBulk method writes data to the I2C slave. It takes the address of the I2C slave to be written to and the bytestream to be written as arguments.

def writeBulk(self, device_address, bytestream):
        try:
            self.H.__sendByte__(CP.I2C_HEADER)
            self.H.__sendByte__(CP.I2C_WRITE_BULK)
            self.H.__sendByte__(device_address)
            self.H.__sendByte__(len(bytestream))
            for a in bytestream:
                self.H.__sendByte__(a)
            self.H.__get_ack__()
        except Exception as ex:
            self.raiseException(ex, "Communication Error , Function : " + inspect.currentframe().f_code.co_name)

 

The scan method scans the I2C bus for connected devices that use I2C as their communication mode. It takes the bus frequency as an argument (100000 Hz by default) and returns a list containing the addresses of the connected devices (as integers).

def scan(self, frequency=100000, verbose=False):
        self.config(frequency, verbose)
        addrs = []
        n = 0
        if verbose:
            print('Scanning addresses 0-127...')
            print('Address', '\t', 'Possible Devices')
        for a in range(0, 128):
            x = self.start(a, 0)
            if x & 1 == 0:  # ACK received
                addrs.append(a)
                if verbose: print(hex(a), '\t\t', self.SENSORS.get(a, 'None'))
                n += 1
            self.stop()
        return addrs
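
Putting scan and readBulk together, a quick check for a connected sensor might look like the sketch below; the address 0x68, register 0x00 and byte count are illustrative values rather than those of any particular sensor.

# Illustrative: list responding devices, then read a few registers from one of them.
addresses = i2c.scan()                    # e.g. [104] if a device answers at 0x68
if 0x68 in addresses:
    values = i2c.readBulk(0x68, 0x00, 6)  # read 6 bytes starting at register 0x00
    print(values)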

 

Additional Sources

  1. Learn more about the principles behind I2C communication https://learn.sparkfun.com/tutorials/i2c
  2. A simple experiment demonstrating the use of I2C communication with Arduino http://howtomechatronics.com/tutorials/arduino/how-i2c-communication-works-and-how-to-use-it-with-arduino/
  3. Java counterpart of the PSLab I2C library https://github.com/fossasia/pslab-android/blob/master/app/src/main/java/org/fossasia/pslab/communication/peripherals/I2C.java

Adding Build Type option in the Apk Generator of the Open Event Android App

The apk-generator gives the event organiser the ability to build an apk with a single click by providing the necessary JSON/binary files. However, it produced only one type of apk, whereas the Open Event Android app itself is available in different build variants.

Recently, the apk generator of the Open Event Android App was enhanced so that the user is asked to select a build type, either Google Play or FDroid, and the apk is generated according to the selected type.

The main difference between the googleplay apk and the fdroid apk is the inclusion of the Google Play libraries, which are left out of the app/build.gradle file for the fdroid build.

To support the build type option, the following apk-generator files had to be changed:

  1. app/views/__init__.py
  2. app/tasks/__init__.py
  3. app/static/js/main.js
  4. app/generator/generator.py
  5. scripts/build.sh
  6. app/templates/index.html

Changes to the files

  • app/templates/index.html

This file is where the UI changes presenting the build type option to the user were made. Along with the rest of the essential information, the user is now presented with a choice between the googleplay and fdroid build types.

  • scripts/build.sh

The Android app supports two different flavours, one for Google Play and the other for F-Droid. This required the build script to be modified according to the build type selected by the user while filling the form.

If the user selected “Google Play” or “Fdroid”, the script would look something like this:

#!/bin/bash
./gradlew assemble${2}Release --info
echo "signing"
jarsigner -keystore ${KEYSTORE_PATH} -storepass ${KEYSTORE_PASSWORD} app/build/outputs/apk/app-$2-release-unsigned.apk ${KEY_ALIAS}
echo "zipaligning"
${1}/zipalign -v 4 app/build/outputs/apk/app-$2-release-unsigned.apk release.apk
echo "done"

Here $2 is googleplay or fdroid, depending on which build type the user selected while building the apk from the apk generator.

  • app/views/__init__.py and app/tasks/__init__.py

These files were modified by adding another parameter to the relevant functions to support the two build type options, as sketched below.
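
A minimal sketch of what such a change might look like is shown below; the function and argument names are illustrative and not the actual apk-generator code.

# Hypothetical sketch: passing the user's build type choice down to the build script.
import subprocess

def generate_apk(working_dir, zipalign_path, build_type='googleplay'):
    # build_type is either 'googleplay' or 'fdroid', as selected in the form.
    subprocess.check_call(['bash', 'scripts/build.sh', zipalign_path, build_type],
                          cwd=working_dir)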

  • app/static/js/main.js

This is where the option selected by the user is read, so that the apk corresponding to the selected build type can be generated and made available to the user. The relevant code is shown below:

$buildTypeRadio.change(
    function () {
        if (this.checked) {
            enableGenerateButton(true);
            buildType = $(this).val();
        }
    }
);

This is how the build type option was incorporated. It gives users the ability to generate and install different build variants of the apk, making the generator more useful from the user's point of view.

Related Links:

Running Dredd Hooks as a Flask App in the Open Event Server

The Open Event Server has been based on the Flask micro-framework since its initial phases. After implementing the API documentation, we decided to implement Dredd testing for the Open Event API.

After isolating each request in Dredd testing, the real challenge is to bind the database engine to the Dredd hooks. Since we have been using the Flask-SQLAlchemy db.Model base class for building all the models, Flask, being a micro framework itself, came to our rescue: we can easily bind the database engine to a Flask app. Conventionally, Dredd hooks are written in pure Python, but we will be running them as a self-contained Flask app.

How do we initialise this Flask app in our Dredd hooks? The Flask app can easily be initialised in the before_all hook, as shown below:

def before_all(transaction):
    app = Flask(__name__)
    app.config.from_object('config.TestingConfig')

 

The database can be bound to the app as follows:

def before_all(transaction):
    app = Flask(__name__)
    app.config.from_object('config.TestingConfig')
    db.init_app(app)
    Migrate(app, db)

 

The challenge now is how to bind the application context when applying the database fixtures. In a normal Flask application this can be done as follows:

with app.app_context():
    # perform your operation

 

While for unit tests in Python:

with app.test_request_context():
    # perform tests

 

But since all the hooks are separate from each other, dredd-hooks-python supports the idea of a single stash in which you can store all the desired variables (it does not have to be a list, and the name stash is not mandatory; here we use a dict).

The app and db can be added to stash as shown below:

stash = {}  # module-level store shared between the hooks

@hooks.before_all
def before_all(transaction):
    app = Flask(__name__)
    app.config.from_object('config.TestingConfig')
    db.init_app(app)
    Migrate(app, db)
    stash['app'] = app
    stash['db'] = db

 

These variables stored in the stash can then be used in the other hooks, as below:

@hooks.before_each
def before_each(transaction):
    with stash['app'].app_context():
        db.engine.execute("drop schema if exists public cascade")
        db.engine.execute("create schema public")
        db.create_all()

 

The same pattern applies to the other hooks.
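
For instance, a clean-up hook could reuse the stashed app in the same way. The following is only a sketch (not part of the original hooks), assuming the standard dredd_hooks decorators and the Flask-SQLAlchemy session:

@hooks.after_each
def after_each(transaction):
    # Re-enter the stashed app context and discard the session used by the test.
    with stash['app'].app_context():
        db.session.remove()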

Related Links:
1. Testing Your API Documentation With Dredd: https://matthewdaly.co.uk/blog/2016/08/08/testing-your-api-documentation-with-dredd/
2. Dredd Tutorial: https://help.apiary.io/api_101/dredd-tutorial/
3. Dredd Docs: http://dredd.readthedocs.io/

Adding IBM Watson TTS Support in Susi Assistant on Raspberry Pi

Susi Hardware project aims at creating a smart assistant for your home that you can run on your Raspberry Pi or similar Development Boards.
I previously wrote a blog post on choosing a perfect Text to Speech engine for SUSI AI and had used Flite as the solution. While Flite is an open source solution that can run locally on a client, it does not provide the same voice quality and speed as cloud providers. We always crave a more natural voice for better interaction with our assistant, and it is always good to have more options. We therefore added the IBM Watson Text to Speech API to the SUSI Hardware project.

IBM Watson TTS can be added to a Python Project easily using the IBM Watson Developer SDK.

For using the IBM Watson Developer SDK for Text to Speech, first of all, we need to sign up for Bluemix
https://console.bluemix.net/registration/

After that, we get an empty dashboard with no services added yet. We need to create a Text to Speech service. To do so, click on the Create Watson Service button.

    

Select Watson on the left pane and then select Text to Speech service from the list.

Select the standard plan from the options and then click on the Create button.

You will get service credentials for your newly created text to speech service. Save it for future reference.

After that, we need to add the Watson Developer Cloud Python package.

sudo pip3 install watson-developer-cloud

On Ubuntu with Python 3.5, watson-developer-cloud has some extra dependencies. Install them using the following command.

sudo apt install libssl-dev

Now we can add Text to Speech to our project. For that, we first need to import the TextToSpeechV1 class, using the following import statement.

from watson_developer_cloud import TextToSpeechV1

Now we need to create a new TextToSpeechV1 object using the Service Credentials we created earlier.

text_to_speech = TextToSpeechV1(
   username='API_USERNAME',
   password='API_PASSWORD')

We can now synthesize a text input and write the speech stream returned by the IBM Watson API to a file.

with open('output.wav', 'wb') as audio_file:
   audio_file.write(
        text_to_speech.synthesize(text, accept='audio/wav', voice='en-US_AllisonVoice'))

In the above code snippet, we open an output file 'output.wav' for writing and write to it the binary audio data returned by the text_to_speech.synthesize method. IBM Watson provides many free voices, and we supply an argument specifying which one to use; here it is the English female voice 'en-US_AllisonVoice'. You may test out more voices in the online demo here and select the voice you find best.

We can play the 'output.wav' file using the play command from SoX. To do so, we need to install the SoX binary.

sudo apt install sox libsox-fmt-all

We can play the file easily now using the following code.

import os
os.system('play output.wav')

The above code invokes the ‘play’ command from the SoX package to play the audio file. We can also use PyAudio to play the audio file but it would require us to manage the audio thread separately. Thus, SoX is a better solution.
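
Putting the snippets above together, a minimal helper that speaks a piece of text could look like this; it simply assembles the calls already shown and assumes the same service credentials and that SoX is installed.

import os
from watson_developer_cloud import TextToSpeechV1

text_to_speech = TextToSpeechV1(
    username='API_USERNAME',
    password='API_PASSWORD')

def speak(text):
    # Request a WAV stream from the Watson TTS API and write it to disk.
    with open('output.wav', 'wb') as audio_file:
        audio_file.write(
            text_to_speech.synthesize(text, accept='audio/wav',
                                      voice='en-US_AllisonVoice'))
    # Play back the saved file using the 'play' command from SoX.
    os.system('play output.wav')

speak('Hello! I am SUSI.')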

Resources:

Setup SUSI Assistant on Raspberry Pi in under 30 minutes

With our ever-growing list of platforms supported by SUSI AI, we now have a client that can run on a Raspberry Pi, and you can access it hands-free! Here is a video you can refer to for a demonstration.

But it might have left you wondering how you can replicate such a setup yourself. It is fairly easy; just follow the instructions below.

You need the following hardware in order to have your own SUSI Assistant running on a Raspberry Pi:

  • A Raspberry Pi (prefer 2 or 3) with Raspbian Jessie OS.
  • A stable internet connection (4 Mbps recommended).
  • A USB Microphone /  USB Webcam with Microphone. You may buy one like this.
  • A Speaker that connects through 3.5mm jack. You may buy one like this.

After you get all the above items in order, you need to get access to a terminal of your Raspberry Pi. You can have that by either connecting a monitor to Raspberry Pi temporarily or by connecting to Raspberry Pi over SSH.

Once this is done, the next step is the installation of the dependencies. The installation of SUSI itself on the Raspberry Pi is automated once the dependencies are in place. Run the following command in the Raspberry Pi terminal.

sudo apt install git swig3.0 portaudio19-dev pulseaudio libpulse-dev unzip sox libatlas-dev libatlas-base-dev libsox-fmt-all python3

After this, you may check whether your input and output devices are working correctly. To do so, run rec recording.wav. It will start recording audio and saving it to a file named recording.wav. Play back the file using play recording.wav. If you hear your audio clearly, the setup is correct; otherwise you need to configure your audio devices. Most of the time the audio configuration works out of the box and the devices are plug and play, so you should not encounter any errors. Once your devices are configured, install the remaining dependencies for SUSI Hardware by running the automated install script. In your terminal run:

$ git clone https://github.com/fossasia/susi_hardware.git
$ cd susi_hardware
$ ./install.sh 

This will install all the remaining dependencies. After the above step is complete, you may run the configuration generator script to choose the Text to Speech and Speech to Text services you wish to use. To do so, run:

$ python3 config_generator.py

Follow the instructions in the script. It will ask you to configure the default service for Text to Speech and Speech to Text and other options. After the configuration is complete, you can simply run the following command to start SUSI.

$ python3 main.py

This will start SUSI in continuous listening mode. You may invoke SUSI at any time simply by saying SUSI followed by a query, which SUSI will then answer.

Since configurations for different hardware devices may vary, you may encounter some problems. In such a scenario, you may refer to the following resources to solve the issues.

Resources:

Building the Meilix Generator with Flask

Meilix Generator is a webapp which is used to trigger the Travis build of Meilix and mail the user the link to the ISO. The Meilix Generator webapp is based on Flask. This blog shows how easy it is to build a webapp, render HTML templates in it, and call and pass data to various functions. I used Flask, the Python framework, to render the HTML templates and handle requests for various purposes (mentioned later in the article) without coding everything from scratch, thanks to Flask's import facility.

What is Flask?

Flask is a Python micro web framework based on Werkzeug and the Jinja 2 template engine. It is used as the backbone of the webapp and provides a set of Python tools from which we can easily build one. It is called micro because it ships with no extra tools or libraries of its own: it comes with minimal requirements, and whoever needs more can import additional libraries and use them. For Meilix Generator I imported several helpers such as render_template, send_from_directory, etc.

Implementation (The use case in Meilix Generator)

First of all, the installation process: we will do the installation in a virtual environment. We prefer a virtual environment to isolate the Python working environment, since some programs require different Python versions to work.
Install virtualenv

sudo pip install virtualenv

Now go to the project folder, create a virtual environment and activate it using

virtualenv venv
. venv/bin/activate

Now install Flask

pip install flask
Creating your project

Now it's time to create a simple project in the directory.
Let's use HTML for the frontend. In the folder, create styles.css for styling and an index.html template for the frontend of the page. We will make one app.py file which looks similar to this:

from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def index():
    """Index page"""
    return render_template("index.html")

if __name__ == '__main__':
    app.run()

Flask routes the / (root) path to the index function, which returns the main template (index.html).

Running it to view the page:

export FLASK_DEBUG=1 FLASK_APP=app.py
flask run

You will find your page at http://127.0.0.1:5000

More options (how else it can help you)

  • Add more HTML templates and refer to them in app.py.
  • Easily use the GitHub API from a different .py file (imported into app.py) to fetch data, e.g. from https://api.github.com/users/user_name: it returns the user's name, repos, followers and much more, as sketched below.
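
A minimal sketch of such a helper, assuming a hypothetical module (say github_info.py) that app.py imports and the requests library:

import requests

def get_github_info(user_name):
    # Fetch public profile data (name, repos, followers, ...) for a GitHub user.
    response = requests.get('https://api.github.com/users/{}'.format(user_name))
    response.raise_for_status()
    data = response.json()
    return {
        'name': data.get('name'),
        'public_repos': data.get('public_repos'),
        'followers': data.get('followers'),
    }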

How I used this idea for FOSSASIA (Meilix Generator)

I used Flask as the backbone of the Meilix Generator project. First, I used from ... import statements to pull in the various libraries needed for the project and then wrote several functions using them. Let's understand the concept using a few examples:

from flask import Flask, render_template

@app.route('/about')
def about():
    # About page
    return render_template("about.html")

or

from flask import Flask, send_from_directory

@app.route('/uploads/<filename>')
def uploaded_file(filename):
    return send_from_directory(app.config['UPLOAD_FOLDER'], filename)

For more details, the app.py file where we used the above ideas can be found here in the Meilix Generator repository.

Important Links and Repositories:

Testing Errors and Exceptions Using Unittest in Open Event Server

Like all the other helper functions in FOSSASIA's Open Event Server, we also need to test the exception and error helper functions and classes. The error helper classes are mainly used to create error handler responses for known errors. For example, we know that error 403 is Access Forbidden, but we want to send a proper source and a proper error message to help identify and handle the error, hence we use the error classes. To ensure that future commits do not mismatch the errors, we implemented unit tests for them.

There are mainly two kinds of error classes: HTTP status errors and exceptions. Depending on the type of error we get in the try-except block for a particular API, we raise the corresponding exception or error.

Unit Test for Exception

Exceptions are written in this form:

@validates_schema
    def validate_quantity(self, data):
        if 'max_order' in data and 'min_order' in data:
            if data['max_order'] < data['min_order']:
                raise UnprocessableEntity({'pointer': '/data/attributes/max-order'},
                                          "max-order should be greater than min-order")

 

This error is raised wherever the data that is sent as POST or PATCH is unprocessable. For example, this is how we raise this error:

raise UnprocessableEntity({'pointer': '/data/attributes/min-quantity'},
                          "min-quantity should be less than max-quantity")

This exception is raised due to a validation error in the data, where the maximum quantity should be greater than the minimum quantity.

To test that the above line indeed raises an exception of UnprocessableEntity with status 422, we use the assertRaises() function. Following is the code:

 def test_exceptions(self):
        # Unprocessable Entity Exception
        with self.assertRaises(UnprocessableEntity):
            raise UnprocessableEntity({'pointer': '/data/attributes/min-quantity'},
                                      "min-quantity should be less than max-quantity")


In the above code, with self.assertRaises() creates a context for the expected exception type, so that when the next line raises an exception, the test asserts that the raised exception matches the expected one, ensuring that the correct exception is being raised.

Unit Test for Error

In the error helper classes, for known HTTP status codes we return a response that is readable and understandable by the user. This is how we create such an error:

ForbiddenError({'source': ''}, 'Super admin access is required')

This is basically the 403: Access Forbidden error, but with the “Super admin access is required” message it becomes far clearer. However, we need to ensure that the status code returned along with this message stays 403 and is not accidentally changed in the future.
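
To see why such an object carries a fixed status code, here is a minimal sketch of how an error helper class of this kind could be structured; it is illustrative only and not the actual Open Event Server code.

class ErrorResponse:
    # Base class holding a JSON-API style error payload.
    status = 500

    def __init__(self, source, detail):
        self.source = source    # e.g. {'source': ''}
        self.detail = detail    # human-readable message

class ForbiddenError(ErrorResponse):
    # 403: Access Forbidden.
    status = 403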

Here, errors work a little differently from exceptions. When we declare a custom error class, we don't really raise that error; instead we return it as a response. So we can't use the assertRaises() function. What we can do, however, is compare the status code and ensure that the error created is the expected one. So we do this:

def test_errors(self):
        with app.test_request_context():
            # Forbidden Error
            forbidden_error = ForbiddenError({'source': ''}, 'Super admin access is required')
            self.assertEqual(forbidden_error.status, 403)

            # Not Found Error
            not_found_error = NotFoundError({'source': ''}, 'Object not found.')
            self.assertEqual(not_found_error.status, 404)


Here we first create an object of the error class ForbiddenError with a sample source and message. Using the assertEqual() function, we then assert that the status attribute of this object is 403, which ensures that the error is of the Access Forbidden type, as expected.
This helps us ensure that no one in the future unknowingly or by mistake changes the error messages or status codes, so that the HTTP status codes in the responses are maintained.


Resources:

Open Event Server: Testing Image Resize Using PIL and Unittest

FOSSASIA's Open Event Server project uses a set of functions to resize an image from its original size to, for example, a thumbnail, an icon or a large image. How do we test these image resizing functions in the Open Event Server project? To test the resizing functionality, we need to verify that the resized image dimensions are the same as the dimensions provided for the resize. For example, in this function we provide the URL of the image we received, and it creates and saves a resized version.

def create_save_resized_image(image_file, basewidth, maintain_aspect, height_size, upload_path,
                              ext='jpg', remove_after_upload=False, resize=True):
    """
    Create and Save the resized version of the background image
    :param resize:
    :param upload_path:
    :param ext:
    :param remove_after_upload:
    :param height_size:
    :param maintain_aspect:
    :param basewidth:
    :param image_file:
    :return:
    """
    filename = '{filename}.{ext}'.format(filename=get_file_name(), ext=ext)
    image_file = cStringIO.StringIO(urllib.urlopen(image_file).read())
    im = Image.open(image_file)

    # Convert to jpeg for lower file size.
    if im.format != 'JPEG':
        img = im.convert('RGB')
    else:
        img = im

    if resize:
        if maintain_aspect:
            width_percent = (basewidth / float(img.size[0]))
            height_size = int((float(img.size[1]) * float(width_percent)))

        img = img.resize((basewidth, height_size), PIL.Image.ANTIALIAS)

    temp_file_relative_path = 'static/media/temp/' + generate_hash(str(image_file)) + get_file_name() + '.jpg'
    temp_file_path = app.config['BASE_DIR'] + '/' + temp_file_relative_path
    dir_path = temp_file_path.rsplit('/', 1)[0]

    # create dirs if not present
    if not os.path.isdir(dir_path):
        os.makedirs(dir_path)

    img.save(temp_file_path)
    upfile = UploadedFile(file_path=temp_file_path, filename=filename)

    if remove_after_upload:
        os.remove(image_file)

    uploaded_url = upload(upfile, upload_path)
    os.remove(temp_file_path)

    return uploaded_url


In this function, we send the image URL, the width and height to resize to, the aspect ratio flag (True or False) and the folder to save to. For this blog, we are going to assume the aspect ratio is False, which means we don't maintain the aspect ratio while resizing. Given the above parameters, we get back the URL of the resized image that was saved.
To test whether it has been resized to the correct dimensions, we use Pillow, or as it is popularly known, PIL. We write a separate function named getsizes() which takes the image file as a parameter. Using the Image module of PIL, we open the file as an image object whose size attribute returns (width, height). From this function, we return that size attribute. Following is the code:

def getsizes(self, file):
        # get file size *and* image size (None if not known)
        im = Image.open(file)
        return im.size


Now that we have this function, it's time to look at the unit test itself. In the unit test we set the dummy width and height we want to resize to, and set the aspect ratio to False, as discussed above. This lets us test that both the width and the height are resized properly. We are using a Creative Commons licensed image for resizing. This is the code:

def test_create_save_resized_image(self):
        with app.test_request_context():
            image_url_test = 'https://cdn.pixabay.com/photo/2014/09/08/17/08/hot-air-balloons-439331_960_720.jpg'
            width = 500
            height = 200
            aspect_ratio = False
            upload_path = 'test'
            resized_image_url = create_save_resized_image(image_url_test, width, aspect_ratio, height, upload_path, ext='png')
            resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]
            resized_width, resized_height = self.getsizes(resized_image_file)


In the above code, from create_save_resized_image we receive the URL of the resized image. Since all the unit tests are written for local settings, we get a URL with localhost as the server. However, the server is not running, so we can't access the image through the URL. Instead, we build the absolute path to the image file from the URL and store it in resized_image_file. We then find the size of the image using the getsizes function we have already written, which gives us the width and height of the newly resized image. We now assert that the width we wanted to resize to is equal to the actual width of the resized image, and make the same check for the height. If both match, the resizing function worked correctly. Here is the complete code:

def test_create_save_resized_image(self):
        with app.test_request_context():
            image_url_test = 'https://cdn.pixabay.com/photo/2014/09/08/17/08/hot-air-balloons-439331_960_720.jpg'
            width = 500
            height = 200
            aspect_ratio = False
            upload_path = 'test'
            resized_image_url = create_save_resized_image(image_url_test, width, aspect_ratio, height, upload_path, ext='png')
            resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]
            resized_width, resized_height = self.getsizes(resized_image_file)
            self.assertTrue(os.path.exists(resized_image_file))
            self.assertEqual(resized_width, width)
            self.assertEqual(resized_height, height)


In the Open Event Orga Server, we use this resize function to create three resized images in various modules, such as events, users, etc. The three sizes are named Large, Thumbnail and Icon. Depending on which one is more suitable, we use it to avoid loading a very big image for a very small div. The exact width and height for these three sizes can be changed from the admin settings of the project. We use the same technique as above and check the sizes for each of them. Here is the code:

def test_create_save_image_sizes(self):
        with app.test_request_context():
            image_url_test = 'https://cdn.pixabay.com/photo/2014/09/08/17/08/hot-air-balloons-439331_960_720.jpg'
            image_sizes_type = "event"
            width_large = 1300
            width_thumbnail = 500
            width_icon = 75
            image_sizes = create_save_image_sizes(image_url_test, image_sizes_type)

            resized_image_url = image_sizes['original_image_url']
            resized_image_url_large = image_sizes['large_image_url']
            resized_image_url_thumbnail = image_sizes['thumbnail_image_url']
            resized_image_url_icon = image_sizes['icon_image_url']

            resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]
            resized_image_file_large = app.config.get('BASE_DIR') + resized_image_url_large.split('/localhost')[1]
            resized_image_file_thumbnail = app.config.get('BASE_DIR') + resized_image_url_thumbnail.split('/localhost')[1]
            resized_image_file_icon = app.config.get('BASE_DIR') + resized_image_url_icon.split('/localhost')[1]

            resized_width_large, _ = self.getsizes(resized_image_file_large)
            resized_width_thumbnail, _ = self.getsizes(resized_image_file_thumbnail)
            resized_width_icon, _ = self.getsizes(resized_image_file_icon)

            self.assertTrue(os.path.exists(resized_image_file))
            self.assertEqual(resized_width_large, width_large)
            self.assertEqual(resized_width_thumbnail, width_thumbnail)
            self.assertEqual(resized_width_icon, width_icon)

Resources:

Creating Unit Tests for File Upload Functions in Open Event Server with Python Unittest Library

In FOSSASIA's Open Event Server, we use the Python unittest library for unit testing various modules of the API code. The unittest library provides various assertion functions to compare the actual and expected values returned by a function or module. In normal modules, we simply use these assertions to compare results, since the parameters are mostly normal data types. However, one very important area for unit testing is file uploading. We cannot simply pass a file or similar payload to the function to unit test it, since it expects request.files-like data, which is only obtained when a file is uploaded or sent as a request to an endpoint. For example, in this function:

def uploaded_file(files, multiple=False):
    if multiple:
        files_uploaded = []
        for file in files:
            extension = file.filename.split('.')[1]
            filename = get_file_name() + '.' + extension
            filedir = current_app.config.get('BASE_DIR') + '/static/uploads/'
            if not os.path.isdir(filedir):
                os.makedirs(filedir)
            file_path = filedir + filename
            file.save(file_path)
            files_uploaded.append(UploadedFile(file_path, filename))

    else:
        extension = files.filename.split('.')[1]
        filename = get_file_name() + '.' + extension
        filedir = current_app.config.get('BASE_DIR') + '/static/uploads/'
        if not os.path.isdir(filedir):
            os.makedirs(filedir)
        file_path = filedir + filename
        files.save(file_path)
        files_uploaded = UploadedFile(file_path, filename)

    return files_uploaded


So, we need to create a mock uploading system to replicate this behaviour. Inside the unit testing function, we create an API route scoped to this test to accept a file as a request. Following is the code:

@app.route("/test_upload", methods=['POST'])
        def upload():
            files = request.files['file']
            file_uploaded = uploaded_file(files=files)
            return jsonify(
                {'path': file_uploaded.file_path,
                 'name': file_uploaded.filename})


The above code creates an app route with the endpoint test_upload. It accepts a request.files object, sends it to the uploaded_file function (the function being unit tested), gets the result, and returns it in JSON format.
With this, we have the endpoint to mock a file upload ready. Next, we need to send a request with a file object. We cannot send normal data, which would be treated as request.form; we want it to arrive in request.files. So we create two classes that inherit from existing ones.

def test_upload_single_file(self):

        class FileObj(StringIO):

            def close(self):
                pass

        class MyRequest(Request):
            def _get_file_stream(*args, **kwargs):
                return FileObj()

        app.request_class = MyRequest


The MyRequest class inherits from the Request class of the Flask framework. We define the file stream of the Request class as a FileObj, and then set the request_class attribute of the Flask app to this new MyRequest class.
After everything is set up, we need to send the request and see whether the uploaded file is saved properly. For this purpose, we take the help of the StringIO library. StringIO creates a file-like object which can then be used to replicate a file upload. So we send the data as {'file': (StringIO('1,2,3,4'), 'test_file.csv')} to the /test_upload endpoint we created previously. As a result, the endpoint receives the file, saves it, and returns the filename and file_path of the stored file.

 with app.test_request_context():
            client = app.test_client()
            resp = client.post('/test_upload', data = {'file': (StringIO('1,2,3,4'), 'test_file.csv')})
            data = json.loads(resp.data)
            file_path = data['path']
            filename = data['name']
            actual_file_path = app.config.get('BASE_DIR') + '/static/uploads/' + filename
            self.assertEqual(file_path, actual_file_path)
            self.assertTrue(os.path.exists(file_path))


After this is done, we need to check whether the file_path we receive is the expected file path, and also whether the file was actually created rather than just dummy data being returned. We get the expected path like this:

actual_file_path = app.config.get('BASE_DIR') + '/static/uploads/' + filename

Then we assert that actual_file_path is the same as the path we received, using assertEqual. Finally, we use assertTrue to ensure that there is a file at that path. That is,

self.assertTrue(os.path.exists(file_path))

Which gives a True if file exists or False if not.

So that basically sums up the unit testing:
1) the file is saved at the correct path, and
2) the file actually exists.
The unit test passes only if both are true; otherwise we get an error or a failure.

Following is the entire code snippet for this unit testing function:

def test_upload_single_file(self):

        class FileObj(StringIO):

            def close(self):
                pass

        class MyRequest(Request):
            def _get_file_stream(*args, **kwargs):
                return FileObj()

        app.request_class = MyRequest

        @app.route("/test_upload", methods=['POST'])
        def upload():
            files = request.files['file']
            file_uploaded = uploaded_file(files=files)
            return jsonify(
                {'path': file_uploaded.file_path,
                 'name': file_uploaded.filename})

        with app.test_request_context():
            client = app.test_client()
            resp = client.post('/test_upload', data = {'file': (StringIO('1,2,3,4'), 'test_file.csv')})
            data = json.loads(resp.data)
            file_path = data['path']
            filename = data['name']
            actual_file_path = app.config.get('BASE_DIR') + '/static/uploads/' + filename
            self.assertEqual(file_path, actual_file_path)
            self.assertTrue(os.path.exists(file_path))

Resources:

Characterization of Transistors Using PSLab

Transistors are one of the key building blocks of all electronics. They are fundamentally three-terminal semiconductor devices, with the terminals labelled the Emitter (E), Base (B), and Collector (C). These active components are found everywhere in electronics, and all of the complex processors that power everything from cellphones to aircraft employ millions of these devices in switching and amplification roles. In this blog post, we shall use the PSLab to explore some of the fundamental properties of transistors and their various applications.

Transistor as an amplifier

In the schematic shown, we shall try to use an NPN transistor to amplify a small signal.

A small-amplitude oscillation generated by W1, with the amplitude knob turned down to a very low level, is used as the input. Since the transistor does not handle bipolar signals, we have mixed in a constant DC voltage generated by PV3 to shift this small signal into the positive domain.

The fluctuating potential difference incident at the base of the transistor creates a corresponding current flow between the Base and Emitter.

By the fundamental property of transistors, this influences the path resistance between the Collector (C) and the Emitter (E), and the resulting amplified voltage can be monitored at the junction between the 1K resistor and the collector.

We have used CH1 to monitor the input voltage, and CH2 for the output.

In a more applied scenario, we can implement the second schematic to create an audio amplifier. Instead of using W1 as the input signal, a speaker is used as a microphone. When a sound signal is incident on the speaker, its membrane oscillates, and as a result the coil attached to it does the same. Since this coil is placed in a magnetic field, its oscillations cause a change in the magnetic flux passing through it, and this change induces a voltage (EMF) at its output. We then use our transistor amplifier to amplify this small EMF.

Figure 2 : A Transistor being used to apply a gain of 81.5x to a small amplitude sine wave. The input waveform (green) is shown on a +/-500mV full scale and the output waveform is shown on a +/-8V scale in order to be able to view both. However, due to the difference in scales, the actual difference in amplitudes is 16 times more than what is visible.
Common Emitter Characteristics
Schematic Diagram

Any introductory course on transistors includes a diagram similar to the one shown, and a description of how, for any base current, the collector current eventually saturates, and how this saturation level is proportional to the base current itself.

With the PSLab's transistor CE characterization app, we can set up this experiment and verify this for ourselves using an NPN transistor. The results shown were gathered using a 2N2222 transistor.

In the schematic, the base current is set by the voltage source PV2 and a high-value series resistor of 200 KOhm. We use the analog input CH3 to monitor the voltage at the base of the transistor in order to calculate the total base current.

Base current = V/R = (PV2 – CH3) / 200e3

Now that we have set a particular base current, PV1 is used to sequentially increase the voltage across the collector and emitter of the transistor. A current limiting resistor of 1K Ohm is used, and CH1 is used to monitor the voltage drop across the transistor.

Collector current = V/R = (PV1 – CH1) / 1e3
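
As a quick sanity check, the two formulas can be evaluated directly. The voltage readings below are made-up example values; only the resistor values come from the schematic.

# Example readings in volts (illustrative values).
PV2, CH3 = 1.00, 0.62   # base drive voltage and voltage measured at the base
PV1, CH1 = 5.00, 2.90   # collector supply voltage and voltage at the collector

base_current = (PV2 - CH3) / 200e3       # amps, through the 200 KOhm resistor
collector_current = (PV1 - CH1) / 1e3    # amps, through the 1 KOhm resistor

print('Ib = {:.2e} A, Ic = {:.2e} A'.format(base_current, collector_current))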

Plotting the behaviour of the collector current with respect to the collector voltage gives us the familiar current voltage characteristics of a transistor.

Figure 3: Common emitter characteristics of an NPN transistor (2N2222) for various base currents

We can now alter the base current by changing PV2, and verify that the saturation current for the collector is indeed a function of it.

Resources