Implementing Endpoint to Resend Email Verification

Earlier, when a user registered via Open Event Frontend, they received a verification link via email to confirm their account. However, this was not enough in the long term. If the confirmation link expired, or if for some reason the verification mail got deleted on the user's side, there was no functionality to resend the verification email, which prevented the user from getting fully registered. Although the front-end already showed the option to resend the verification link, there was no support from the server for it yet. So it was decided that a separate endpoint should be implemented to allow re-sending the verification link to a user. `/resend-verification-email` was an endpoint that would fit this action, so we decided to go with it and create a route in the `auth.py` file, which was the appropriate place for this feature to reside. The first step was to do the necessary imports and then the definition:

```python
from app.api.helpers.mail import send_email_confirmation
from app.models.mail import USER_REGISTER_WITH_PASSWORD
...
...
@auth_routes.route('/resend-verification-email', methods=['POST'])
def resend_verification_email():
    ...
```

Now we safely fetch the email mentioned in the request and then search the database for the user corresponding to that email:

```python
def resend_verification_email():
    try:
        email = request.json['data']['email']
    except TypeError:
        return BadRequestError({'source': ''}, 'Bad Request Error').respond()
    try:
        user = User.query.filter_by(email=email).one()
    except NoResultFound:
        return UnprocessableEntityError(
            {'source': ''}, 'User with email: ' + email + ' not found.').respond()
    else:
        ...
```

Once a user has been identified in the database, we proceed further and create an essentially unique hash for the user verification.
This hash is in turn used to generate a verification link that is then ready to be sent via email to the user:

```python
    else:
        serializer = get_serializer()
        hash_ = str(base64.b64encode(str(serializer.dumps(
            [user.email, str_generator()])).encode()), 'utf-8')
        link = make_frontend_url(
            '/email/verify'.format(id=user.id), {'token': hash_})
```

Finally, the email is sent:

```python
        send_email_with_action(
            user, USER_REGISTER_WITH_PASSWORD,
            app_name=get_settings()['app_name'], email=user.email)
        if not send_email_confirmation(user.email, link):
            return make_response(jsonify(message="Some error occurred"), 500)
        return make_response(jsonify(message="Verification email resent"), 200)
```

But this was not enough. When the endpoint was tested, it was found that actual emails were not being delivered, even after the email settings had been correctly configured locally. After a bit of debugging, it turned out that the settings, which used Sendgrid to send emails, relied on a deprecated Sendgrid API endpoint. A separate email function is used to send emails via Sendgrid, and it contained an old endpoint that is no longer recommended by Sendgrid:

```python
@celery.task(name='send.email.post')
def send_email_task(payload, headers):
    requests.post(
        "https://api.sendgrid.com/api/mail.send.json",
        data=payload,
        headers=headers
    )
```

The new endpoint, as per Sendgrid's documentation, is:

```
https://api.sendgrid.com/v3/mail/send
```

But this was not the only change required. Sendgrid had also modified the structure of the requests it accepts, and the new structure was different from the existing one used in the server.
Following is the new structure:

```json
{"personalizations": [{"to": [{"email": "example@example.com"}]}],
 "from": {"email": "example@example.com"},
 "subject": "Hello, World!",
 "content": [{"type": "text/plain", "value": "Heya!"}]}
```

The header structure also changed, so the structure in the server was updated to:

```python
headers = {
    "Authorization": ("Bearer " + key),
    "Content-Type": "application/json"
}
```

The Sendgrid function (which is executed as a Celery task) was modified as follows, to incorporate the changes…
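As a rough sketch of how a v3-style request body and headers can be assembled (this is an illustration, not the server's actual implementation; the helper names are my own):

```python
import json

# New v3 endpoint, per Sendgrid's documentation
SENDGRID_V3_URL = "https://api.sendgrid.com/v3/mail/send"


def build_v3_payload(to_email, from_email, subject, text):
    """Build a dict matching Sendgrid's v3 /mail/send request structure."""
    return {
        "personalizations": [{"to": [{"email": to_email}]}],
        "from": {"email": from_email},
        "subject": subject,
        "content": [{"type": "text/plain", "value": text}],
    }


def build_headers(key):
    """v3 authenticates with a Bearer token and expects a JSON body."""
    return {
        "Authorization": "Bearer " + key,
        "Content-Type": "application/json",
    }


payload = build_v3_payload("user@example.com", "noreply@example.com",
                           "Verify your email", "Click the link to verify.")
body = json.dumps(payload)  # serialized body to POST to SENDGRID_V3_URL
```

In the server, a Celery task would then POST `body` with these headers to `SENDGRID_V3_URL` instead of the deprecated endpoint.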


Adding Check-in Attributes to Tickets

Recently, it was decided by the Open Event Orga App team that the event ticket API response from Open Event Server should have two additional attributes for specifying event check-in access. At first sight, it seemed that adding these options would only require changes in the orga app, but it turned out that the entire Ticket API on the server would need this addition. Implementing these attributes turned out to be quite straightforward. Specifically, the fields to be added were the booleans `is_checkin_restricted` and `auto_checkin_enabled`. By default, check-in is restricted and is not automatic, so the default values for these fields were chosen to be `True` and `False` respectively. To add them, the ticket model file was changed first, with the addition of these two columns:

```python
class Ticket(SoftDeletionModel):
    ...
    is_checkin_restricted = db.Column(db.Boolean)  # <--
    auto_checkin_enabled = db.Column(db.Boolean)   # <--
    ...

    def __init__(self,
                 name=None,
                 event_id=None,
                 ...
                 is_checkin_restricted=True,
                 auto_checkin_enabled=False):
        self.name = name
        ...
        self.is_checkin_restricted = is_checkin_restricted
        self.auto_checkin_enabled = auto_checkin_enabled
        ...
```

Since the ticket database model was updated, a migration had to be performed. The following shell commands (at the Open Event Server project root) performed the migration and database update, and a migration file was then generated:

```shell
$ python manage.py db migrate
$ python manage.py db upgrade
```

Here's the generated migration file:

```python
from alembic import op
import sqlalchemy as sa

revision = '6440077182f0'
down_revision = 'eaa029ebb260'


def upgrade():
    op.add_column('tickets', sa.Column('auto_checkin_enabled', sa.Boolean(), nullable=True))
    op.add_column('tickets', sa.Column('is_checkin_restricted', sa.Boolean(), nullable=True))


def downgrade():
    op.drop_column('tickets', 'is_checkin_restricted')
    op.drop_column('tickets', 'auto_checkin_enabled')
```

The next code change was required in the ticket API schema.
The change was essentially the same as the one in the model file; just these two new fields were added:

```python
class TicketSchemaPublic(SoftDeletionSchema):
    ...
    id = fields.Str(dump_only=True)
    name = fields.Str(required=True)
    ...
    is_checkin_restricted = fields.Boolean(default=True)   # <--
    auto_checkin_enabled = fields.Boolean(default=False)   # <--
    event = Relationship(attribute='event',
                         self_view='v1.ticket_event',
                         self_view_kwargs={'id': '<id>'},
                         related_view='v1.event_detail',
                         related_view_kwargs={'ticket_id': '<id>'},
                         schema='EventSchemaPublic',
                         type_='event')
    ...
```

Now all that remained were changes in the API documentation, which were made accordingly. This completed the addition of the two check-in attributes in the ticket API, which eventually made their way into the orga app. They can be requested as usual by the front-end and user app as well.

Resources and Links:

- Open Event Server pull request for check-in attributes
- Alembic documentation
- Marshmallow API Schema Reference
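For a consumer such as the orga app, the new attributes simply appear in the ticket's JSON:API payload. A minimal client-side sketch of reading them (the sample document and its dasherized attribute names are illustrative assumptions, not an actual server response):

```python
import json

# Hypothetical JSON:API ticket document, for illustration only
sample = json.loads("""
{
  "data": {
    "type": "ticket",
    "id": "1",
    "attributes": {
      "name": "Standard",
      "is-checkin-restricted": true,
      "auto-checkin-enabled": false
    }
  }
}
""")

attrs = sample["data"]["attributes"]
# Fallbacks mirror the schema defaults: restricted, not automatic
restricted = attrs.get("is-checkin-restricted", True)
auto = attrs.get("auto-checkin-enabled", False)
```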


Adding Port Specification for Static File URLs in Open Event Server

Until now, static files stored locally on Open Event Server did not have a port specification in their URLs. This opened the door to problems when consuming local APIs, and would have created inconsistencies if two server processes were served on the same machine at different ports. In this blog post, I will explain my approach towards solving this problem and describe code snippets demonstrating the changes I made in the Open Event Server codebase.

The first part of this process involved finding the source of the bug. For this, my open-source integrated development environment, Microsoft Visual Studio Code, turned out to be especially useful. It allowed me to jump from function calls to function definitions quickly: I started at `events.py` and jumped all the way to `storage.py`, where I finally found the source of this bug, in the `upload_local()` function:

```python
def upload_local(uploaded_file, key, **kwargs):
    """
    Uploads file locally. Base dir - static/media/
    """
    filename = secure_filename(uploaded_file.filename)
    file_relative_path = 'static/media/' + key + '/' + generate_hash(key) + '/' + filename
    file_path = app.config['BASE_DIR'] + '/' + file_relative_path
    dir_path = file_path.rsplit('/', 1)[0]
    # delete current
    try:
        rmtree(dir_path)
    except OSError:
        pass
    # create dirs
    if not os.path.isdir(dir_path):
        os.makedirs(dir_path)
    uploaded_file.save(file_path)
    file_relative_path = '/' + file_relative_path
    if get_settings()['static_domain']:
        return get_settings()['static_domain'] + \
            file_relative_path.replace('/static', '')
    url = urlparse(request.url)
    return url.scheme + '://' + url.hostname + file_relative_path
```

Look closely at the return statement:

```python
return url.scheme + '://' + url.hostname + file_relative_path
```

Bingo! This is the source of our bug. A straightforward solution is to simply concatenate the port number in between, but that would make this one-liner look clumsy: unreadable and un-Pythonic.
We therefore use Python string formatting:

```python
return '{scheme}://{hostname}:{port}{file_relative_path}'.format(
    scheme=url.scheme, hostname=url.hostname, port=url.port,
    file_relative_path=file_relative_path)
```

But this statement isn't perfect. There's an edge case that might give an unexpected URL. If the port isn't originally specified, string formatting will substitute `url.port` with `None`. This results in a URL like `http://localhost:None/some/file_path.jpg`, which is obviously something we don't desire. We therefore append a call to Python's string `replace()` method:

```python
replace(':None', '')
```

The resulting return statement now looks like the following:

```python
return '{scheme}://{hostname}:{port}{file_relative_path}'.format(
    scheme=url.scheme, hostname=url.hostname, port=url.port,
    file_relative_path=file_relative_path).replace(':None', '')
```

This should fix the problem. But that's not enough. We need to ensure that the project adapts well to the change we made. We check this by running the project tests locally:

```shell
$ nosetests tests/unittests
```

Unfortunately, the tests fail with the following traceback:

```
======================================================================
ERROR: test_create_save_image_sizes (tests.unittests.api.helpers.test_files.TestFilesHelperValidation)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/open-event-server/tests/unittests/api/helpers/test_files.py", line 138, in test_create_save_image_sizes
    resized_width_large, _ = self.getsizes(resized_image_file_large)
  File "/open-event-server/tests/unittests/api/helpers/test_files.py", line 22, in getsizes
    im = Image.open(file)
  File "/usr/local/lib/python3.6/site-packages/PIL/Image.py", line 2312, in open
    fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: '/open-event-server:5000/static/media/events/53b8f572-5408-40bf-af97-6e9b3922631d/large/UFNNeW5FRF/5980ede1-d79b-4907-bbd5-17511eee5903.jpg'
```
It's evident from this traceback that the code in our test framework is not converting the image URL to a file path correctly. The port specification part is working fine, but it should not affect file names; those should be independent of the port number. The files saved originally do not have the port specified in their name, but…
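The port-handling trick above can be demonstrated in isolation. The following sketch reproduces the fixed return statement as a standalone function (the function name and sample URLs are mine, for illustration):

```python
from urllib.parse import urlparse


def build_static_url(request_url, file_relative_path):
    """Mimic the fixed return statement: include the port when present,
    and strip the ':None' artifact when it isn't."""
    url = urlparse(request_url)
    return '{scheme}://{hostname}:{port}{file_relative_path}'.format(
        scheme=url.scheme, hostname=url.hostname, port=url.port,
        file_relative_path=file_relative_path).replace(':None', '')


# An explicit port is preserved in the generated URL:
with_port = build_static_url('http://localhost:5000/events',
                             '/static/media/a.jpg')
# → 'http://localhost:5000/static/media/a.jpg'

# Without one, urlparse().port is None, and replace() removes the artifact:
without_port = build_static_url('http://example.com/events',
                                '/static/media/a.jpg')
# → 'http://example.com/static/media/a.jpg'
```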


Patching an Attribute Type Across a Flask App

Recently, a contributor discovered that the rating attribute for event feedbacks in Open Event was of type String. The type was indeed incorrect, and after a discussion, the developers concluded that it should be of type Float. In this post, I explain how to perform this simple migration task of changing a data type across a typical Flask app's stack.

To begin this change, we first modify the database model. The model file for feedbacks (`feedback.py`) looks like the following:

```python
from app.models import db


class Feedback(db.Model):
    """Feedback model class"""
    __tablename__ = 'feedback'
    id = db.Column(db.Integer, primary_key=True)
    rating = db.Column(db.String, nullable=False)  # <-- should be Float
    comment = db.Column(db.String, nullable=True)
    user_id = db.Column(db.Integer,
                        db.ForeignKey('users.id', ondelete='CASCADE'))
    event_id = db.Column(db.Integer,
                         db.ForeignKey('events.id', ondelete='CASCADE'))

    def __init__(self, rating=None, comment=None, event_id=None, user_id=None):
        self.rating = rating  # <-- cast here for safety
        self.comment = comment
        self.event_id = event_id
        self.user_id = user_id
    ...
```

The change here is quite straightforward and spans just two lines:

```python
rating = db.Column(db.Float, nullable=False)
```

and

```python
self.rating = float(rating)
```

We now perform the database migration using a couple of `manage.py` commands in the terminal. This file is different for different projects, but the migration commands look essentially the same. For Open Event Server, the `manage.py` file is at the root of the project directory (as is conventional). After cd'ing to the root, we execute the following commands:

```shell
$ python manage.py db migrate
$ python manage.py db upgrade
```

These commands update our Open Event database so that the rating is now stored as a Float.
However, if we execute these commands one after the other, we note that an exception is thrown:

```
sqlalchemy.exc.ProgrammingError: column "rating" cannot be cast automatically to type float
HINT: Specify a USING expression to perform the conversion.
 'ALTER TABLE feedback ALTER COLUMN rating TYPE FLOAT USING rating::double precision'
```

This happens because the migration code is ambiguous about how the existing String values should be converted to Float. The hint tells us to utilize PostgreSQL's USING clause to perform the conversion. We accomplish this manually by using the `psql` client to connect to our database and issue the type change:

```shell
$ psql oevent
psql (10.1)
Type "help" for help.

oevent=# ALTER TABLE feedback ALTER COLUMN rating TYPE FLOAT USING rating::double precision;
```

We now exit the `psql` shell and run the above migration commands again. This time the migration commands pass successfully and a migration file is generated. For our migration, the file looks like the following:

```python
from alembic import op
import sqlalchemy as sa

# These values would be different for your migrations.
revision = '194a5a2a44ef'
down_revision = '4cac94c86047'


def upgrade():
    op.alter_column('feedback', 'rating',
                    existing_type=sa.VARCHAR(),
                    type_=sa.Float(),
                    existing_nullable=False)


def downgrade():
    op.alter_column('feedback', 'rating',
                    existing_type=sa.Float(),
                    type_=sa.VARCHAR(),
                    existing_nullable=False)
```

This is an auto-generated file (built by the database migration tool Alembic), and we need to specify in it any extra commands we used while migrating our database. Since we did use an extra command to perform the conversion, we need to add it here. The PostgreSQL USING clause can be added to Alembic…
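The model-side cast mentioned above ("cast here for safety") can be illustrated in isolation. The class below is a simplified stand-in for the SQLAlchemy model, not the actual code; the `None` guard is my own defensive addition, since `float(None)` would raise a `TypeError` for the default argument:

```python
class Feedback:
    """Simplified stand-in for the Feedback model, showing the cast."""

    def __init__(self, rating=None, comment=None):
        # Cast in __init__ so string input (e.g. from form data)
        # never reaches the Float column uncoerced.
        self.rating = float(rating) if rating is not None else None
        self.comment = comment


Feedback(rating="4.5").rating  # string input is coerced to 4.5
Feedback(rating=3).rating      # ints become floats too
```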


Deploying a Postgres-Based Open Event Server to Kubernetes

In this post, I will walk you through deploying the Open Event Server on Kubernetes, hosted on Google Cloud Platform's Compute Engine. You'll need a Google account for this, so create one if you don't have one.

First, I cd into the root of our project's Git repository. Now I need to create a Dockerfile. I will use Docker to package our project into a nice image which can then be "pushed" to Google Cloud Platform. A Dockerfile is essentially a text document which contains the commands required to assemble an image. For more details on how to write one for your project specifically, check out the Docker docs. For Open Event Server, the Dockerfile looks like the following:

```dockerfile
FROM python:3-slim

ENV INSTALL_PATH /open_event
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH

# apt-get update and install some packages
RUN apt-get update && apt-get install -y wget git ca-certificates curl && update-ca-certificates && apt-get clean -y

# install deps
RUN apt-get install -y --no-install-recommends build-essential python-dev libpq-dev libevent-dev libmagic-dev && apt-get clean -y

# copy just requirements
COPY requirements.txt requirements.txt
COPY requirements requirements

# install requirements
RUN pip install --no-cache-dir -r requirements.txt
RUN pip install eventlet

# copy remaining files
COPY . .

CMD bash scripts/docker_run.sh
```

These commands simply install the dependencies and set up the environment for our project. The final CMD command runs our project, which, in our case, is a server. After the Dockerfile is configured, I go to Google Cloud Platform's console and create a new project. Once I enter the project name and other details, I enable billing in order to use Google's cloud resources. A credit card is required to set up a billing account, but Google doesn't charge any money for that. Also, one of the perks of being a part of FOSSASIA was that I had about $3000 in Google Cloud credits! Once billing is enabled, I then enable the Container Engine API.
It is required to support Kubernetes on Google Compute Engine. The next step is to install the Google Cloud SDK. Once that is done, I run the following command to install the Kubernetes CLI tool:

```shell
gcloud components install kubectl
```

Then I configure the Google Cloud project zone via the following command:

```shell
gcloud config set compute/zone us-west1-a
```

Now I will create a disk (for storing our code and data) as well as a temporary instance for formatting that disk:

```shell
gcloud compute disks create pg-data-disk --size 1GB
gcloud compute instances create pg-disk-formatter
gcloud compute instances attach-disk pg-disk-formatter --disk pg-data-disk
```

Once the disk is attached to our instance, I SSH into it:

```shell
gcloud compute ssh "pg-disk-formatter"
```

Now, I list the available disks:

```shell
ls /dev/disk/by-id
```

This lists multiple disks (as shown in the terminal window below), but the one I want to format is `google-persistent-disk-1`. I format that disk via the following command:

```shell
sudo mkfs.ext4 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/disk/by-id/google-persistent-disk-1
```

Finally, after the formatting is done, I exit the SSH session and detach the disk from the instance: gcloud…
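The formatted disk is what later backs the Postgres data directory in the cluster. As a rough illustration of how such a GCE persistent disk can be mounted by a pod (the manifest below is a sketch with placeholder names, not the project's actual configuration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgres              # placeholder name
spec:
  containers:
    - name: postgres
      image: postgres:10      # placeholder image tag
      volumeMounts:
        - name: pg-data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: pg-data
      gcePersistentDisk:
        pdName: pg-data-disk  # the disk created and formatted above
        fsType: ext4
```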
