Implementing Endpoint to Resend Email Verification

Earlier, when a user registered via the Open Event Frontend, they received a verification link via email to confirm their account. However, this was not enough in the long term: if the confirmation link expired, or the verification mail got deleted on the user's side, there was no way to resend the verification email, which prevented the user from completing registration. Although the front-end already showed an option to resend the verification link, there was no support from the server for it yet.

So it was decided that a separate endpoint should be implemented to allow re-sending the verification link to a user. /resend-verification-email was an endpoint name that fit this action, so we went with it and created a route in the `auth.py` file, the appropriate place for this feature to reside. The first step was to add the necessary imports and the route definition:

from app.api.helpers.mail import send_email_confirmation
from app.models.mail import USER_REGISTER_WITH_PASSWORD
...
...
@auth_routes.route('/resend-verification-email', methods=['POST'])
def resend_verification_email():
...

Now we safely fetch the email specified in the request and then search the database for the user corresponding to that email:

def resend_verification_email():
    try:
        email = request.json['data']['email']
    except (TypeError, KeyError):  # no JSON body, or missing keys in it
        return BadRequestError({'source': ''}, 'Bad Request Error').respond()

    try:
        user = User.query.filter_by(email=email).one()
    except NoResultFound:
        return UnprocessableEntityError(
            {'source': ''}, 'User with email: ' + email + ' not found.').respond()
    else:
        ...

Once the user has been identified in the database, we proceed to create an essentially unique hash for user verification. This hash is in turn used to generate a verification link that is then ready to be sent to the user via email:

else:
    serializer = get_serializer()
    hash_ = str(base64.b64encode(str(serializer.dumps(
        [user.email, str_generator()])).encode()), 'utf-8')
    link = make_frontend_url('/email/verify', {'token': hash_})
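
For context, here's a minimal standalone sketch of what such a token generator boils down to, assuming get_serializer() wraps an itsdangerous serializer keyed on the app's secret (make_verification_token and its arguments are illustrative names, not the project's actual helpers):

import base64
from itsdangerous import URLSafeSerializer

def make_verification_token(email, secret, salt_str):
    # serialize the email plus a random string, then base64-encode
    # the result -- mirroring how hash_ is built above
    serializer = URLSafeSerializer(secret)
    payload = serializer.dumps([email, salt_str])
    return str(base64.b64encode(str(payload).encode()), 'utf-8')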

Finally, the email is sent:

    send_email_with_action(
        user, USER_REGISTER_WITH_PASSWORD,
        app_name=get_settings()['app_name'], email=user.email)
    if not send_email_confirmation(user.email, link):
        return make_response(jsonify(message="Some error occurred"), 500)
    return make_response(jsonify(message="Verification email resent"), 200)
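
With the route in place, the endpoint can be exercised with a simple request. Here's a hedged example; the localhost address, port and /v1/auth prefix are assumptions about a typical local setup:

import requests

# assumed local server address; the endpoint expects {"data": {"email": ...}}
response = requests.post(
    'http://localhost:5000/v1/auth/resend-verification-email',
    json={'data': {'email': 'user@example.com'}})
print(response.status_code, response.json())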

But this was not enough. When the endpoint was tested, actual emails were not being delivered, even with the email settings correctly configured locally. After a bit of debugging, it turned out that the settings, which used Sendgrid to send emails, pointed to a deprecated Sendgrid API endpoint. A separate email function is used to send emails via Sendgrid, and it contained an old endpoint that Sendgrid no longer recommends:

@celery.task(name='send.email.post')
def send_email_task(payload, headers):
    requests.post(
        "https://api.sendgrid.com/api/mail.send.json",
        data=payload,
        headers=headers
    )

The new endpoint, as per Sendgrid’s documentation, is:

https://api.sendgrid.com/v3/mail/send

But this was not the only change required. Sendgrid had also modified the structure of the requests it accepts, which differed from the existing structure used in the server. The following is the new structure:

'{"personalizations": [{"to": [{"email": "example@example.com"}]}],"from": {"email": "example@example.com"},"subject": "Hello, World!","content": [{"type": "text/plain", "value": "Heya!"}]}'

The header structure also changed, so the headers in the server were updated to:

headers = {
    "Authorization": ("Bearer " + key),
    "Content-Type": "application/json"
}

The Sendgrid function (which is executed as a Celery task) was modified as follows, to incorporate the changes in the API endpoint and structure:

import json
...
@celery.task(name='send.email.post')
def send_email_task(payload, headers):
    data = {"personalizations": [{"to": []}]}
    data["personalizations"][0]["to"].append({"email": payload["to"]})
    data["from"] = {"email": payload["from"]}
    data["subject"] = payload["subject"]
    data["content"] = [{"type": "text/html", "value": payload["html"]}]
    requests.post(
        "https://api.sendgrid.com/v3/mail/send",
        data=json.dumps(data),
        headers=headers,
        verify=False  # doesn't work with verification in celery context
    )
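
For reference, here's a hedged sketch of how this task might be queued from the mail helpers. The payload keys follow the mapping above, while api_key stands in for the Sendgrid API key pulled from settings (an assumption for this sketch):

payload = {
    'to': 'user@example.com',
    'from': 'noreply@example.com',
    'subject': 'Verify your email',
    'html': '<p>Click the link to verify your account.</p>',
}
headers = {
    "Authorization": "Bearer " + api_key,  # api_key: Sendgrid key (assumed)
    "Content-Type": "application/json"
}
send_email_task.delay(payload, headers)  # .delay() queues the Celery task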

 

As the verify=False flag in send_email_task indicates, there is a bug that prevents SSL verification from succeeding within the Celery context, even though verification works when the same code runs outside Celery. But email sending via Sendgrid now actually works, which makes our resend-verification endpoint functional:

[Screenshot: successful response from the resend-verification-email endpoint]

Email is received successfully by the recipient:

[Screenshot: verification email in the recipient's inbox]

Thus, a working email verification endpoint is implemented, which can be easily integrated into the frontend.



Adding Check-in Attributes to Tickets

Recently, the Open Event Orga App team decided that the event ticket API response from Open Event Server should have two additional attributes specifying event check-in access. At first sight, it seemed that adding these options would only require changes in the orga app, but it turned out that the entire Ticket API on the server needed this addition.

Implementing these attributes turned out to be quite straightforward. Specifically, the fields to be added were the booleans is_checkin_restricted and auto_checkin_enabled. By default, check-in is restricted and not automatic, so the defaults were chosen as True for is_checkin_restricted and False for auto_checkin_enabled. To add them, the ticket model file was changed first, with the addition of these two columns:

class Ticket(SoftDeletionModel):
    ...
    is_checkin_restricted = db.Column(db.Boolean)  # <--
    auto_checkin_enabled = db.Column(db.Boolean)  # <--
    ...
    def __init__(self,
                 name=None,
                 event_id=None,
                 ...
                 is_checkin_restricted=True,
                 auto_checkin_enabled=False):
        self.name = name
        ...
        self.is_checkin_restricted = is_checkin_restricted
        self.auto_checkin_enabled = auto_checkin_enabled
        ...

Since the ticket database model was updated, a migration had to be performed. The following shell commands (run at the Open Event Server project root) performed the migration and database upgrade; a migration file was then generated:

$ python manage.py db migrate
$ python manage.py db upgrade

Here’s the generated migration file:

from alembic import op
import sqlalchemy as sa

revision = '6440077182f0'
down_revision = 'eaa029ebb260'

def upgrade():
    op.add_column('tickets', sa.Column('auto_checkin_enabled', sa.Boolean(), nullable=True))
    op.add_column('tickets', sa.Column('is_checkin_restricted', sa.Boolean(), nullable=True))

def downgrade():
    op.drop_column('tickets', 'is_checkin_restricted')
    op.drop_column('tickets', 'auto_checkin_enabled')

The next code change was required in the ticket API schema. The change was essentially the same as in the model file – just the two new fields were added:

class TicketSchemaPublic(SoftDeletionSchema):
    ...
    id = fields.Str(dump_only=True)
    name = fields.Str(required=True)
    ...
    is_checkin_restricted = fields.Boolean(default=True)  # <--
    auto_checkin_enabled = fields.Boolean(default=False)  # <--
    event = Relationship(attribute='event',
                         self_view='v1.ticket_event',
                         self_view_kwargs={'id': '<id>'},
                         related_view='v1.event_detail',
                         related_view_kwargs={'ticket_id': '<id>'},
                         schema='EventSchemaPublic',
                         type_='event')
    ...

Now all that remained were corresponding changes in the API documentation, which were made accordingly. This completed the addition of the two check-in attributes to the ticket API, which eventually made their way into the orga app; they can be requested as usual by the front-end and user app as well, as the sample response shape below shows.
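
Assuming the API's usual dasherized attribute names, a ticket resource in a response would now carry the two flags, roughly like this (the id and values are illustrative):

{
  "data": {
    "type": "ticket",
    "id": "1",
    "attributes": {
      "name": "Early Bird",
      "is-checkin-restricted": true,
      "auto-checkin-enabled": false
    }
  }
}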



Adding Port Specification for Static File URLs in Open Event Server

Until now, static files stored locally on the Open Event server did not have a port specification in their URLs. This opened the door for problems when consuming local APIs, and would have created inconsistencies if two server processes were served on the same machine at different ports. In this blog post, I will explain my approach to solving this problem and describe the changes I made in the Open Event Server codebase.

The first part of this process involved finding the source of the bug. For this, my open-source integrated development environment, Microsoft Visual Studio Code, turned out to be especially useful: it allowed me to jump from function calls to function definitions quickly:

[Screenshot: jumping from function calls to definitions in Visual Studio Code]

I started at events.py and jumped all the way to storage.py, where I finally found the source of the bug, in the upload_local() function:

def upload_local(uploaded_file, key, **kwargs):
    """
    Uploads file locally. Base dir - static/media/
    """
    filename = secure_filename(uploaded_file.filename)
    file_relative_path = 'static/media/' + key + '/' + generate_hash(key) + '/' + filename
    file_path = app.config['BASE_DIR'] + '/' + file_relative_path
    dir_path = file_path.rsplit('/', 1)[0]
    # delete current
    try:
        rmtree(dir_path)
    except OSError:
        pass
    # create dirs
    if not os.path.isdir(dir_path):
        os.makedirs(dir_path)
    uploaded_file.save(file_path)
    file_relative_path = '/' + file_relative_path
    if get_settings()['static_domain']:
        return get_settings()['static_domain'] + \
            file_relative_path.replace('/static', '')
    url = urlparse(request.url)
    return url.scheme + '://' + url.hostname + file_relative_path

Look closely at the return statement:

return url.scheme + '://' + url.hostname + file_relative_path

Bingo! This is the source of our bug. A straightforward solution is to simply concatenate the port number in between, but that would make this one-liner clumsy – unreadable and unpythonic. We therefore use Python string formatting:

return '{scheme}://{hostname}:{port}{file_relative_path}'.format(
    scheme=url.scheme, hostname=url.hostname, port=url.port,
    file_relative_path=file_relative_path)

But this statement isn't perfect. There's an edge case that can give an unexpected URL: if the port isn't originally specified, url.port is None, and string formatting will substitute the literal text None. This results in a URL like http://localhost:None/some/file_path.jpg, which is obviously something we don't want. We therefore append a call to Python's string replace() method: replace(':None', '')

The resulting return statement now looks like the following:

return '{scheme}://{hostname}:{port}{file_relative_path}'.format(
    scheme=url.scheme, hostname=url.hostname, port=url.port,
    file_relative_path=file_relative_path).replace(':None', '')
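
To see why the replace() call is needed, here's a quick illustration of what urlparse reports when no port is present in the URL:

from urllib.parse import urlparse

print(urlparse('http://localhost/media/pic.jpg').port)       # None -> ':None' after formatting
print(urlparse('http://localhost:5000/media/pic.jpg').port)  # 5000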

This should fix the problem. But that's not enough: we need to ensure that the rest of the project adapts well to the change we made. We check this by running the project tests locally:

$ nosetests tests/unittests

Unfortunately, the tests fail with the following traceback:

======================================================================
ERROR: test_create_save_image_sizes (tests.unittests.api.helpers.test_files.TestFilesHelperValidation)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/open-event-server/tests/unittests/api/helpers/test_files.py", line 138, in test_create_save_image_sizes
    resized_width_large, _ = self.getsizes(resized_image_file_large)
  File "/open-event-server/tests/unittests/api/helpers/test_files.py", line 22, in getsizes
    im = Image.open(file)
  File "/usr/local/lib/python3.6/site-packages/PIL/Image.py", line 2312, in open
    fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: '/open-event-server:5000/static/media/events/53b8f572-5408-40bf-af97-6e9b3922631d/large/UFNNeW5FRF/5980ede1-d79b-4907-bbd5-17511eee5903.jpg'

It's evident from this traceback that the test framework is not converting the image URL to a file path correctly. The port specification itself works fine, but it should not leak into file names: the files saved originally do not have a port in their paths, yet the test code now expects one, hence the error above.

Using the traceback, I went to the code in the test framework where this problem occurred:

def test_create_save_image_sizes(self):
    with app.test_request_context():
        image_url_test = 'https://cdn.pixabay.com/photo/2014/09/08/17/08/hot-air-balloons-439331_960_720.jpg'

        image_sizes_type = "event"
        width_large = 1300
        width_thumbnail = 500
        width_icon = 75
        image_sizes = create_save_image_sizes(image_url_test, image_sizes_type)

        resized_image_url = image_sizes['original_image_url']
        resized_image_url_large = image_sizes['large_image_url']
        resized_image_url_thumbnail = image_sizes['thumbnail_image_url']
        resized_image_url_icon = image_sizes['icon_image_url']

        resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]
        resized_image_file_large = app.config.get('BASE_DIR') + resized_image_url_large.split('/localhost')[1]
        resized_image_file_thumbnail = app.config.get('BASE_DIR') + resized_image_url_thumbnail.split('/localhost')[1]
        resized_image_file_icon = app.config.get('BASE_DIR') + resized_image_url_icon.split('/localhost')[1]

        resized_width_large, _ = self.getsizes(resized_image_file_large)
        resized_width_thumbnail, _ = self.getsizes(resized_image_file_thumbnail)
        resized_width_icon, _ = self.getsizes(resized_image_file_icon)

        self.assertTrue(os.path.exists(resized_image_file))
        self.assertEqual(resized_width_large, width_large)
        self.assertEqual(resized_width_thumbnail, width_thumbnail)
        self.assertEqual(resized_width_icon, width_icon)

Obviously, resized_image_url.split('/localhost')[1] will involve the port number. So we have to change this line, which means we also have to change the subsequent lines involving the thumbnail, icon and large images. Instead of stripping the port for each of these, we can do it collectively at an earlier stage, so we redefine the image_sizes dictionary right after the create_save_image_sizes() call:

image_sizes = {
    url_name: urlparse(image_sizes[url_name]).path
    for url_name in image_sizes
}  # Now file names don't contain the port (this gives relative URLs).

Now we can simplify the lines each of which earlier required port-stripping code:

resized_image_file = app.config.get('BASE_DIR') + resized_image_url
resized_image_file_large = app.config.get('BASE_DIR') + resized_image_url_large
resized_image_file_thumbnail = app.config.get('BASE_DIR') + resized_image_url_thumbnail
resized_image_file_icon = app.config.get('BASE_DIR') + resized_image_url_icon

We now make a similar modification in the test_create_save_resized_image() test method, as it also involves URL-to-file-path conversion. We break the line

resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]

into two lines:

resized_image_path = urlparse(resized_image_url).path
resized_image_file = app.config.get('BASE_DIR') + resized_image_path

Now let’s run the tests (which failed earlier) again:

[Screenshot: the previously failing tests now pass]

Finally, the tests pass without errors! Now we can add some extra convenience functionality: we can also strip the port when it corresponds to the scheme we're using. For example, if we're using the https scheme, we need not specify port 443, since that is the default port for https. We add this functionality by creating a mapping of such correspondences and checking it before generating the URL. To do this, we go back to storage.py and add the following:

SCHEMES = {80: 'http', 443: 'https'}

And add

# No need to specify scheme-corresponding port
port = url.port
if port and url.scheme == SCHEMES.get(url.port, None):
    port = None

just before

return '{scheme}://{hostname}:{port}{file_relative_path}'.format(
    scheme=url.scheme, hostname=url.hostname, port=port,
    file_relative_path=file_relative_path).replace(':None', '')
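
Putting the pieces together, here's a small standalone sketch of the final logic. build_url is a hypothetical name for illustration; in the server this code lives inline in upload_local():

from urllib.parse import urlparse

SCHEMES = {80: 'http', 443: 'https'}

def build_url(request_url, file_relative_path):
    url = urlparse(request_url)
    port = url.port
    if port and url.scheme == SCHEMES.get(url.port, None):
        port = None  # drop ports that match the scheme's default
    return '{scheme}://{hostname}:{port}{path}'.format(
        scheme=url.scheme, hostname=url.hostname, port=port,
        path=file_relative_path).replace(':None', '')

build_url('https://example.com:443/v1/events', '/media/a.jpg')  # 'https://example.com/media/a.jpg'
build_url('http://localhost:5000/v1/events', '/media/a.jpg')    # 'http://localhost:5000/media/a.jpg'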

And that finishes our work! The tests pass again, and on top of that we have the new functionality of mentioning ports only when they don't correspond to the URL scheme!



Patching an Attribute Type Across a Flask App

Recently, a contributor discovered that the rating attribute for event feedback in Open Event was of type String. The type was indeed incorrect; after a discussion, the developers concluded that it should be of type Float. In this post, I explain how to perform this simple migration task of changing a data type across a typical Flask app's stack.

To begin this change, we first modify the database model. The model file for feedback (feedback.py) looks like the following:

from app.models import db


class Feedback(db.Model):
    """Feedback model class"""
    __tablename__ = 'feedback'
    id = db.Column(db.Integer, primary_key=True)
    rating = db.Column(db.String, nullable=False)  # <-- should be float
    comment = db.Column(db.String, nullable=True)
    user_id = db.Column(db.Integer,
                        db.ForeignKey('users.id', ondelete='CASCADE'))
    event_id = db.Column(db.Integer,
                         db.ForeignKey('events.id', ondelete='CASCADE'))

    def __init__(self, rating=None, comment=None, event_id=None, user_id=None):
        self.rating = rating  # <-- cast here for safety
        self.comment = comment
        self.event_id = event_id
        self.user_id = user_id

The change here is quite straightforward, and spans just 2 lines:

rating = db.Column(db.Float, nullable=False)

and

self.rating = float(rating)

We now perform the database migration using a couple of manage.py commands in the terminal. This file is different for different projects, but the migration commands look essentially the same. For Open Event Server, the manage.py file is at the root of the project directory (as is conventional). After cd'ing to the root, we execute the following commands:

$ python manage.py db migrate

and then

$ python manage.py db upgrade

These commands update our Open Event database so that the rating is now stored as a Float. However, if we execute these commands one after the other, we note that an exception is thrown:

sqlalchemy.exc.ProgrammingError: column "rating" cannot be cast automatically to type float
HINT:  Specify a USING expression to perform the conversion.
'ALTER TABLE feedback ALTER COLUMN rating TYPE FLOAT USING rating::double precision'

This happens because the migration code is ambiguous about how to cast the existing string values to float. It hints that we should use PostgreSQL's USING clause to perform the conversion. We accomplish this manually by connecting to our database with the psql client and issuing the type change:

$ psql oevent
psql (10.1)
Type "help" for help.

oevent=# ALTER TABLE feedback ALTER COLUMN rating TYPE FLOAT USING rating::double precision;

We now exit the psql shell and run the above migration commands again. We see that the migration commands pass successfully this time, and a migration file is generated. For our migration, the file looks like the following:

from alembic import op
import sqlalchemy as sa


# These values would be different for your migrations.
revision = '194a5a2a44ef'

down_revision = '4cac94c86047'


def upgrade():
    op.alter_column('feedback', 'rating',
                    existing_type=sa.VARCHAR(),
                    type_=sa.Float(),
                    existing_nullable=False)


def downgrade():
    op.alter_column('feedback', 'rating',
                    existing_type=sa.Float(),
                    type_=sa.VARCHAR(),
                    existing_nullable=False)

This is an auto-generated file (built by the database migration tool Alembic), and we need to add the extra conversion we performed manually. Since we used a USING expression while migrating, we specify it here: the PostgreSQL USING clause can be added to Alembic migration files via the postgresql_using keyword argument. The edited version of the upgrade function looks like the following:

def upgrade():
    op.alter_column('feedback', 'rating',
                    existing_type=sa.VARCHAR(),
                    type_=sa.Float(),
                    existing_nullable=False,
                    postgresql_using='rating::double precision')

This completes our work on database migration. Migration files are useful for a variety of purposes – they allow us to easily get to a previous database state, or a new database state as suggested by a project collaborator. Just like git, they allow for easy version control and collaboration.

But our work didn't end with the database migration. We also decided to impose limits on the rating value, concluding that 0-5 would be a good range. Furthermore, we decided to round off the rating to the nearest 0.5: an input rating of 2.3 becomes 2.5, while 4.1 becomes 4.0. Such values are conventional for ratings across numerous web and mobile apps, which will hopefully make adoption easier for new users.

For the validation part, marshmallow came to the rescue. It is a simple object serialization and deserialization library for Python: it converts complex Python objects to JSON data for communicating over HTTP and vice versa, among other things. It also facilitates pre-processing of input data and therefore allows clean validation of payloads. In our case, marshmallow was used specifically to validate the range of the rating attribute of feedback. The original feedback schema file looked like the following:

from marshmallow_jsonapi import fields
from marshmallow_jsonapi.flask import Schema, Relationship

from app.api.helpers.utilities import dasherize


class FeedbackSchema(Schema):
    """
    Api schema for Feedback Model
    """
    class Meta:
        """
        Meta class for Feedback Api Schema
        """
        type_ = 'feedback'
        self_view = 'v1.feedback_detail'
        self_view_kwargs = {'id': '<id>'}
        inflect = dasherize

    id = fields.Str(dump_only=True)
    rating = fields.Str(required=True)  # <-- need to validate this
    comment = fields.Str(required=False)
    event = Relationship(attribute='event',
                         self_view='v1.feedback_event',
                         self_view_kwargs={'id': '<id>'},
                         related_view='v1.event_detail',
                         related_view_kwargs={'feedback_id': '<id>'},
                         schema='EventSchemaPublic',
                         type_='event')

To validate the rating attribute, we use marshmallow’s Range class:

from marshmallow.validate import Range

Now we change the line

rating = fields.Str(required=True)

to

rating = fields.Float(required=True, validate=Range(min=0, max=5))

So with marshmallow, just about 2 lines of work implements rating validation for us!
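
A quick way to see the validator in action is a bare-bones marshmallow schema; RatingOnly is a throwaway name for this demo, separate from the JSON:API schema above:

from marshmallow import Schema, fields
from marshmallow.validate import Range

class RatingOnly(Schema):
    rating = fields.Float(required=True, validate=Range(min=0, max=5))

RatingOnly().validate({'rating': 4.5})  # {} -- no errors, valid payload
RatingOnly().validate({'rating': 7.0})  # {'rating': [...]} -- out of range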

After the validation part, what remains is the rounding-off business logic. This is simple mathematics; to get to the nearest 0.5, the formula goes as follows:

rating * 2 --> round off --> divide the result by 2
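
As a quick sanity check in plain Python (independent of the model), the formula behaves as described:

def round_to_half(rating):
    # multiply by 2, round, divide by 2
    return round(float(rating) * 2) / 2

round_to_half(2.3)  # 2.5
round_to_half(4.1)  # 4.0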

We will use Python's built-in round() function to accomplish this. To implement the business logic, we go back to the feedback model class and modify its constructor. Before the type change, the constructor looked like the following:

def __init__(self, rating=None, comment=None, event_id=None, user_id=None):
    self.rating = rating
    self.comment = comment
    self.event_id = event_id
    self.user_id = user_id

We change this by first converting the input rating to float, rounding it off and then finally assigning the result to feedback’s rating attribute. The new constructor is shown below:

def __init__(self, rating=None, comment=None, event_id=None, user_id=None):
    rating = float(rating)
    self.rating = round(rating * 2, 0) / 2  # Rounds to nearest 0.5

    self.comment = comment
    self.event_id = event_id
    self.user_id = user_id

This completes the rounding-off part and ultimately concludes the rating's type change from String to Float. We saw how a simple high-level type change requires editing code across multiple files and using different tools along the way. In doing so, we also learned the utility of Alembic and marshmallow for database migration and data validation, respectively.



Deploying a Postgres-Based Open Event Server to Kubernetes

In this post, I will walk you through deploying the Open Event Server on Kubernetes, hosted on Google Cloud Platform's Compute Engine. You'll need a Google account for this, so create one if you don't have one.

First, I cd into the root of our project's Git repository. Now I need to create a Dockerfile, which I will use to package our project into a nice image that can then be "pushed" to Google Cloud Platform. A Dockerfile is essentially a text document containing the commands required to assemble an image. For details on how to write one for your own project, check out the Docker docs. For Open Event Server, the Dockerfile looks like the following:

FROM python:3-slim

ENV INSTALL_PATH /open_event
RUN mkdir -p $INSTALL_PATH

WORKDIR $INSTALL_PATH

# apt-get update and update some packages
RUN apt-get update && apt-get install -y wget git ca-certificates curl && update-ca-certificates && apt-get clean -y


# install deps
RUN apt-get install -y --no-install-recommends build-essential python-dev libpq-dev libevent-dev libmagic-dev && apt-get clean -y

# copy just requirements
COPY requirements.txt requirements.txt
COPY requirements requirements

# install requirements
RUN pip install --no-cache-dir -r requirements.txt 
RUN pip install eventlet

# copy remaining files
COPY . .

CMD bash scripts/docker_run.sh

These commands simply install the dependencies and set up the environment for our project. The final CMD command is for running our project, which, in our case, is a server.

After our Dockerfile is configured, I go to Google Cloud Platform’s console and create a new project:

[Screenshot: creating a new project in the Google Cloud Platform console]

Once I enter the project name and other details, I enable billing in order to use Google's cloud resources. A credit card is required to set up a billing account, but Google doesn't charge any money for that. (One of the perks of being a part of FOSSASIA was that I had about $3000 in Google Cloud credits!) Once billing is enabled, I enable the Container Engine API, which is required to support Kubernetes on Google Compute Engine. The next step is to install the Google Cloud SDK. Once that is done, I run the following command to install the Kubernetes CLI tool:

gcloud components install kubectl

Then I configure the Google Cloud Project Zone via the following command:

gcloud config set compute/zone us-west1-a

Now I will create a disk (for storing our code and data) as well as a temporary instance for formatting that disk:

gcloud compute disks create pg-data-disk --size 1GB
gcloud compute instances create pg-disk-formatter
gcloud compute instances attach-disk pg-disk-formatter --disk pg-data-disk

Once the disk is attached to our instance, I SSH into it:

gcloud compute ssh "pg-disk-formatter"

Now I list the available disks:

ls /dev/disk/by-id

This will list multiple disks (as shown in the Terminal window below), but the one I want to format is “google-persistent-disk-1”.

[Screenshot: available disks listed in the SSH session]

Now I format that disk via the following command:

sudo mkfs.ext4 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/disk/by-id/google-persistent-disk-1

Finally, after the formatting is done, I exit the SSH session and detach the disk from the instance:

gcloud compute instances detach-disk pg-disk-formatter --disk pg-data-disk

Now I create a Kubernetes cluster and get its credentials (for later use) via gcloud CLI:

gcloud container clusters create opev-cluster
gcloud container clusters get-credentials opev-cluster

I can additionally bind our deployment to a domain name. If you already have a domain name, you can use the IP reserved below as an A record in your domain's DNS zone; otherwise, get a free domain at freenom.com and do the same with that domain. The static external IP address is reserved in the following manner:

gcloud compute addresses create testip --region us-west1

It'll be useful to note down this IP for further reference. I now add this IP (and our domain name, if I used one) to the Kubernetes configuration files (for Open Event Server, these were specific files for services like nginx and PostgreSQL):

#
# Deployment of the API Server and celery worker
#
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: api-server
  namespace: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api-server
    spec:
      volumes:
        - name: data-store
          emptyDir: {}
      containers:
        - name: api-server
          image: eventyay/nextgen-api-server:latest
          command: ["/bin/sh", "-c"]
          args: ["./kubernetes/run.sh"]
          livenessProbe:
            httpGet:
              path: /health-check/
              port: 8080
            initialDelaySeconds: 30
            timeoutSeconds: 3
          ports:
            - containerPort: 8080
              protocol: TCP
          envFrom:
            - configMapRef:
                name: api-server
          env:
            - name: C_FORCE_ROOT
              value: 'true'
            - name: PYTHONUNBUFFERED
              value: 'TRUE'
            - name: DEPLOYMENT
              value: 'api'
            - name: FORCE_SSL
              value: 'yes'
          volumeMounts:
            - mountPath: /opev/open_event/static/uploads/
              name: data-store
        - name: celery
          image: eventyay/nextgen-api-server:latest
          command: ["/bin/sh", "-c"]
          args: ["./kubernetes/run.sh"]
          envFrom:
            - configMapRef:
                name: api-server
          env:
            - name: C_FORCE_ROOT
              value: 'true'
            - name: DEPLOYMENT
              value: 'celery'
          volumeMounts:
            - mountPath: /opev/open_event/static/uploads/
              name: data-store
      restartPolicy: Always

These files will look similar for your specific project as well. Finally, I run the deployment script provided for Open Event Server to deploy with defined configuration:

./kubernetes/deploy.sh create all

This is the deployment script specific to this project; for your app, it will be a bunch of simple kubectl create commands specific to your app's context. Once this script completes, our app is deployed! After a few minutes, I can see it live at the domain I provided above.

I can also see our project’s statistics and set many other attributes at Google Cloud Platform’s console:

[Screenshot: project statistics in the Google Cloud Platform console]

And that’s it for this post! For more details on deploying Docker-based containers on Kubernetes clusters hosted by Google, visit Google Cloud Platform’s documentation.
