Migrating to the Next Generation of Open Event from Legacy


This blog article illustrates how, after significant refactoring, eventyay.com was switched from Open Event Legacy to the new version of Open Event, which has a decoupled Open Event API Server and frontend. We will discuss this switch from two aspects: the database migration and the DevOps logistics, as we moved from Google Cloud to a more economically feasible platform, Hetzner Cloud.

Downloading a copy of the legacy database from a Kubernetes pod on the Google Cloud Platform

The first step was to obtain a copy of the database being used in production. The database was stored inside a pod named postgres in the Kubernetes cluster.

gcloud auth login
gcloud container clusters get-credentials vintage-cluster --zone us-west1-a --project eventyay

These commands authenticated the Google Cloud SDK with the project eventyay and fetched the credentials for the Kubernetes cluster.

Next, to gain bash access to the postgres pod, the exec command of the kubectl CLI was used.

kubectl exec -it postgres -- /bin/bash
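
The pod name used above can be confirmed beforehand by listing the pods in the cluster:

kubectl get pods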

The database in both the new and legacy versions was PostgreSQL. Using the pg_dump utility, the database can be dumped into a transferable file from inside the pod (the database name open_event below is a placeholder; substitute the actual one).

pg_dump -U postgres open_event > legacy-24-03-2019.pgsql

However, this file still resides on the Kubernetes pod's local storage. The cp utility of the kubectl CLI comes in handy to copy that file from the pod to local storage.

kubectl cp default/postgres:legacy-24-03-2019.pgsql ~/Downloads/

This command transfers the file to local storage. Now we have a database we can begin to refactor. For this, we first need to import it into the Postgres instance on the local machine to take a peek inside the schema.

psql -U postgres
CREATE DATABASE legacy_24_03_2019;
CREATE DATABASE legacy_24_03_2019_reference;
psql -U postgres legacy_24_03_2019 < ~/Downloads/legacy-24-03-2019.pgsql

These commands restore the data from the legacy dump into the newly created database inside the local Postgres instance. The schema and architecture of the new version of the server differ from those of the legacy database, hence the DB has to go through migrations.
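
Before migrating, psql's meta-commands can be used to take a quick peek at the restored schema, for example:

psql -U postgres legacy_24_03_2019 -c '\dt'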

The last migration file shared by the legacy and the next-gen server corresponds to the migration ddaa6df27340.

However, it is important to note that the migrations branched from this point onwards. Hence, to downgrade the database to the last legacy migration, we need to use the migrations folder of the legacy version of eventyay and not the original migrations directory.
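
One way to confirm the branch point is to inspect the migration history with the Flask-Migrate CLI exposed through manage.py:

python manage.py db history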

Assuming that the migrations from the legacy server are stored in a folder called migrations-legacy:

mv migrations migrations-new
mv migrations-legacy migrations
Change the config in the local .env inside open-event-server to switch to this newly created database.
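For example, assuming the server reads the standard DATABASE_URL variable (the credentials here are placeholders):

DATABASE_URL=postgresql://postgres:password@localhost:5432/legacy_24_03_2019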
python manage.py db downgrade ddaa6df27340

Then we upgrade to the latest revision of the DB. First, switch the directories back:

mv migrations migrations-legacy
mv migrations-new migrations

Then we upgrade the database:

python manage.py db upgrade
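
After the upgrade, the revision the database sits at can be checked against the latest migration:

python manage.py db current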

These commands migrate the database in principle, and we have a basic system with core functionality using legacy data, but the database is not at all ready to be used in production yet. It is rife with bugs. The migration files don't cover every change, and some are outright breaking. Some changes, like comma-separated values being converted into schemas of their own, had no migration written for them at all. These issues were tackled separately.

The most significant of these were the scripts written to convert the custom form data, stored in a serialized format in the legacy tables, into a separate schema.

// Rows exported from the legacy custom forms table
const forms = [/* exported custom forms table of legacy */];

let n = 2567; // next free id in the new custom_forms table
for (const row of forms) {
  const form = JSON.parse(row.speaker_form);
  const eventId = row.event_id;
  for (const key in form) {
    // snake_case -> camelCase, e.g. photo_url -> photoUrl
    const mkey = key.toLowerCase()
      .replace(/_([a-z])/g, m => m.toUpperCase())
      .replace(/_/g, '');
    // name, email, title and track are fixed (always included) fields
    const fixed = key === 'name' || key === 'email' || key === 'title' || key === 'track';
    console.log("insert into custom_forms values(" + n + "," + eventId + ", '" + mkey +
      "', 'speaker', " + fixed + "," + (form[key].include == 1) + "," +
      (form[key].require == 1) + ",'text');");
    n++;
  }
}

A similar script had to be written for sessions:

// Reuses the forms array exported above; ids continue in a new range
n = 11987;
for (const row of forms) {
  const form = JSON.parse(row.session_form);
  const eventId = row.event_id;
  for (const key in form) {
    // snake_case -> camelCase, as before
    const mkey = key.toLowerCase()
      .replace(/_([a-z])/g, m => m.toUpperCase())
      .replace(/_/g, '');
    // title and track are fixed (always included) fields for sessions
    const fixed = key === 'title' || key === 'track';
    console.log("insert into custom_forms values(" + n + "," + eventId + ", '" + mkey +
      "', 'session', " + fixed + "," + (form[key].include == 1) + "," +
      (form[key].require == 1) + ",'text');");
    n++;
  }
}
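
The output of both scripts can be saved to a file and applied to the migrated database with psql (the filename here is illustrative):

psql -U postgres legacy_24_03_2019 -f custom_forms_inserts.sql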

These scripts returned the SQL queries required to insert the custom forms into the corresponding tables. After some more miscellaneous fixes like this one, the migration was fully complete and ready for production.
