How to get the database of a running container from gcloud kubernetes

Dumping Databases with Kubernetes from Google Cloud

We currently have eventyay version 1 running on Google Cloud. I did not find documentation on this from Open Event, so the first step is to understand how the system is set up. This is easier if you already have some overall knowledge of how Kubernetes or gcloud works; of course it takes time if you don't.

Exploring the Google Cloud interface, there are many buttons and many things that can be done, and among them there is a console button. Clicking it opens a fully integrated terminal emulator with access to a virtual Linux environment where your project lives. This is great, because it gives you full control over your project: there is no need to learn how to navigate the Google Cloud GUI and configure everything, since you already have your own shell!

The first thing to find out is which commands are available for our purpose. There are two important ones: the "kubectl" and "gcloud" shell commands. Right now gcloud won't be particularly useful; what we need first is a list of the pods currently running. (Pods are small virtual machines, similar to containers in Docker.)

From the list of available pods, I assume that web-1299653859-mq59r is the pod where Open Event is running. We can confirm this with kubectl get services, which shows the ports on which the services are exposed; web is on port 8080, so that is probably where Open Event lives. Through kubectl exec -it web-1299653859-mq59r -- /bin/bash we can get a shell inside the container!
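The exploration steps above can be sketched as a short Cloud Shell session (the pod name is the one from this particular cluster; yours will differ):

```shell
# List the pods currently running in the default namespace
kubectl get pods

# List services and the ports they expose
# (here, the "web" service turned out to be on port 8080)
kubectl get services

# Open an interactive shell inside the web pod
kubectl exec -it web-1299653859-mq59r -- /bin/bash
```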
Great, so now that we're inside the Open Event container, we try to work out how to connect to the database. The environment variables only show the ports and the IP address; the configuration file where the password is actually stored does not seem to be anywhere in plain view. What to do?

Getting inside the postgres container directly

After some time digging around the container, I gave up trying to find the username and password and instead got the idea of going to the postgres container directly. And there you go: a list of the databases running on postgres. We now use pg_dump to dump the database: kubectl exec -it postgres -- su postgres -c "pg_dump opev". We pipe the output to a file, download it from the Cloud Shell, and that is how to get a dump of the database.

References:
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/
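Put together, the dump ends up as a one-liner redirected to a file (the pod name postgres and database name opev are the ones from this cluster; note that when piping, dropping kubectl's -t flag avoids stray carriage returns from the pseudo-terminal):

```shell
# Run pg_dump inside the postgres pod as the postgres user
# and capture the SQL dump locally in Cloud Shell
kubectl exec -i postgres -- su postgres -c "pg_dump opev" > opev_dump.sql

# Sanity-check that the dump is non-empty before downloading it
wc -l opev_dump.sql
```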


Ember js as a web development framework.

Open Event uses a frontend framework called Ember.js, and understanding it is very necessary if you want to contribute to the frontend; otherwise it will be extremely hard, considering how big and complex Ember can actually be. This is a collection of quick facts I have gathered about Ember.js; knowing them might help you contribute to Open Event even faster!

Ember.js is a very opinionated web framework that uses many of the new ES6/ES7 features. It provides an MVC workflow for developers to create websites in a more organized way, and introduces many new concepts into designing user interfaces. Because of this MVC perspective, Ember.js is better suited to creating web apps than to creating, say, a blog.

Another point worth mentioning is that Ember.js is not very intuitive: it has a steep learning curve, which means editing Ember.js web apps carries an investment cost for every person who wants to work on them, independently of whether they already know JavaScript, HTML, and CSS. The reason is that Ember.js has its own internal conventions, so to read the code it generates you are obliged to learn how the entire system works.

It works not very differently from a Node.js application converted to run in the browser (as with browserify): the HTML the user agent downloads is very small, and it fetches two JavaScript files. One just defines functions with all your application data (your routes, templates, and compiled Handlebars files), and the other controls the first: it contains the interpreter and boots the website up, so to speak. That is one of the main reasons big websites that use Ember.js take so long to load.
Every time you enter the website you download the entire site as a single JavaScript file (once it is cached this is quicker, so the first visit will usually take much longer than later ones). This is not surprising, because most modern web frameworks do the same. I think it is very important to know some details about how this specific framework works.

For the developer, the experience is quite pleasant. You have an application.hbs file where your main app (the main user interface) lives, and you can use commands to generate routes; when you add routes without assigning them to any parent, they are added as "children" of the main app.

Before we get lost in the new jargon, let me quickly explain what a route is. When you visit a website, your URL contains a path from the site's root to whatever file you are currently looking at; every new URL you visit can be added to your history (if you have it enabled), and you are allowed to go back and forth in history! Every address is also a part of the website, and a website is traditionally organized like a file system: there are directories, and there…
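The route-generation workflow mentioned above is driven by ember-cli; a minimal sketch (the app and route names here are made up for illustration):

```shell
# Scaffold a new Ember application
ember new my-app
cd my-app

# Generate a route: this creates app/routes/speakers.js and
# app/templates/speakers.hbs, and registers the route in app/router.js
ember generate route speakers

# Serve the app locally with live reload
ember serve
```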


Open Event Server installation and docker-compose

Installing Open Event Server

I am going to walk you through the process of installing Open Event Server and the possible issues you might encounter. This will probably be helpful if you are stuck with it, but most importantly it is groundwork for building some sort of installer, as I have stated numerous times before.

Requirements for the installation:
- A Debian-based distro, or a Unix-like system with the apt package manager
- Enough memory and storage (depends on whether you put the database on your system or not)
- Knowledge of Unix-like systems

First, here are the commands. I tried to make them as close to a copy-paste as possible, but it may not work exactly depending on your setup, so afterwards I will explain what each step does.

apt-get update
apt-get install -y locales git sudo
locale-gen en_US.UTF-8
if [ -z "$LANG" ]; then export LANG=en_US.UTF-8; fi
export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true
git clone https://github.com/kreijstal-contributions/open-event-api.git --depth=1
cd open-event-server
#libffi6 libffi-dev
apt-get install -y redis-server tmux libssl-dev vim postgresql postgresql-contrib build-essential python3-dev libpq-dev libevent-dev libmagic-dev python3-pip wget ca-certificates python3-venv curl && update-ca-certificates && apt-get clean -y
service redis-server start
service postgresql start
cp .env.example .env
#sed -i -e 's/SERVER_NAME/#SERVER_NAME/g' .env
#The above was needed because the repository used to ship with an annoying SERVER_NAME that only listened on localhost, so accessing via 127.0.0.1 did not work
#pip install virtualenv #python 2.7.14
#virtualenv .
python3 -m venv .
#pip install pip==9.0.1
source bin/activate
#pip3 install -r requirements/tests.txt
pip3 install --no-cache-dir -r requirements.txt
#pip install eventlet
cat << EOF | su - postgres -c psql
-- Create the database user:
CREATE USER john WITH PASSWORD 'start';
-- Create the database:
CREATE DATABASE oevent OWNER=john LC_COLLATE='en_US.utf8' LC_CTYPE='en_US.utf8' ENCODING='UTF8' TEMPLATE=template0;
EOF
python3 create_db.py admin@admin.com admin
python3 manage.py db stamp head
python3 manage.py runserver

OK, let's start from the beginning. We assume here that you know how to open a Linux terminal, and that you are on Linux.

Description of commands

Updating the repo

apt-get update

Let's start with some assumptions: you are on either Ubuntu or Debian. apt-get is on every Debian-based distro, so you can do this on Linux Mint as well, but if you are using another distro you might have to consult your package manager's documentation to find where to get the needed packages. Unfortunately open-event-server has dependencies, and to get them we use Ubuntu's package manager; that means some packages might not be available on every distro, and there is currently no support for that.

Installing git and sudo

apt-get install -y locales git sudo

Let's also assume you are on a bare-bones Ubuntu installation, as many people are, and start with some very basic requirements you probably already fulfil. git is not installed by default, so here we install it; we also install sudo, and the -y flag avoids being asked for confirmation.

Setting the locales

locale-gen en_US.UTF-8
if [ -z "$LANG" ]; then export LANG=en_US.UTF-8; fi

Some very bare-bones Linux distributions do not have the UTF-8 locale enabled. Here we generate it in case it hasn't been, and if the $LANG variable…
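Once python3 manage.py runserver is up, a quick way to check that the services came up is a sketch like the following (the port 5000 is an assumption here; check your .env for the actual value):

```shell
# Check that redis and postgres are answering
redis-cli ping              # expect: PONG
su - postgres -c "psql -l"  # should list the oevent database

# Hit the API root (port 5000 is an assumption, check your .env)
curl -i http://localhost:5000/
```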


Voice interfaces

Voice Interfaces

(This blogpost is based on a meetup that was held about voice interfaces.)

Evolution of User Interfaces

From keyboards, to GUIs, to touch, and now voice! First we used punch cards to talk to machines; it was very costly, used mostly for research, and it was our first experience with machines.

Punched cards. At the beginning of computing we find the first large computers: huge, noisy boxes whose complexity of management only universities and large companies could afford. You wrote your code, a machine punched holes into cards, and the computer would parse this data, convert it to its own internal representation, and start a job for it. These machines were new and many people wanted to use them, so there were long queues. They were very expensive and difficult to use; there was no user interface design at all, and since they were not built for the public, that was never a consideration. Fortunately computing kept evolving and access became increasingly common, although computers were still mainly tools for work. For years, punched cards were used to enter commands; it was not comfortable at all.

Then Xerox arrived with its graphical interface: an important leap in interacting with a computer, because until then you could only work through command lines, and you had to be an expert. Unfortunately, few remember the people of Xerox PARC and their contribution to the world of information technology. Most think of Apple and its Macintosh, or Microsoft and Windows 1.0, as the "inventors" of graphical interfaces. But no, it was not like that. What we cannot deny Apple and Microsoft, especially the former, is their weight in the creation of personal computers and their democratization, so that we could all have one.

Then comes the command line.
The command line (or CLI) is an interface: a method of giving written instructions to the underlying program. This interface is usually called the system console or command console. It handles information in the simplest way possible, without graphics or anything other than raw text. Orders are written as lines of text (hence the name), and if the programs respond, they usually do so by printing information on the following lines. Almost any program can be designed to offer the user some kind of CLI; for example, almost all first-person PC games have a built-in command line interface, used for diagnostic and administrative tasks. As a primary work tool, command lines are mainly used by programmers and system administrators, especially on Unix-based operating systems; in scientific and engineering environments; and by a smaller subset of advanced home users. To use a command line, you just need the keyboard: you type commands and the computer replies to those queries. It is still widely used today because it is…


Converting Drupal into WordPress

Converting Drupal into WordPress

WordPress has the ability to update automatically, while our existing Drupal systems require time and manual care. This is the main reason for our switch.

The first thing I did was google for a plugin. I found the fg-drupal2wordpress plugin, but it only migrates the posts and doesn't import users/media/tags unless you pay about 40€, so naturally I stayed away from it. My second idea was to look on GitHub, where I found another plugin that was open source. I used it, and it worked for almost everything except the images.

(Original website; even some images are missing now because the website is not on /. The gallery.)

Challenges

The issue with the images is that the Drupal site I was given used a very old version of the Drupal "Images" add-on, so the plugin converts these images into "posts", but they are not actually converted into images; as a result, some posts are blank. In Drupal, every blog post and image is stored as a node. With the SQL query

SELECT nid, file_usage.fid, node.title, files.filepath
FROM perspektive.node
INNER JOIN perspektive.file_usage ON perspektive.node.nid = perspektive.file_usage.id
INNER JOIN perspektive.files ON perspektive.file_usage.fid = perspektive.files.fid;

we can see where the images are stored, which node id they have, and which file id they have according to Drupal.

To bring these images into WordPress, you need a script that reads every image location, copies it into the corresponding wp-content/uploads/YYYY/MM/ directory, makes a thumbnail of every image, and adds around 5 rows per image to the wp_postmeta table. If you do that, you will effectively import all the images into the WordPress media library, and that is without counting the tags and the author of every image. It is not an easy task!
Proposals

One solution to this problem was to add an `img` tag to each of these posts so that every post has its own image, and this works in principle; you might even add an "image" tag to every image post. However, the images are still not added to the WordPress media library, so if you want a page to act as a gallery, it won't work that way.

What you would need to do is add every image to the WordPress media library. You can do this, but you have to convert a lot of images, and it's not easy because you also have to create thumbnails. You could partly automate this process using WP-CLI and add a lot of images with it; the problem is that the images' post id or node id stays in Drupal, and copying the information contained in the Drupal database is not hard, but it takes time to do.

Another option was to update the open source plugin myself, which is what I started to do, so I started digging into its code to find out how it worked. I probably…
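The WP-CLI route mentioned above could look roughly like this; a sketch under the assumption that the Drupal files live under /var/www/drupal/files (the path and the title scheme are hypothetical):

```shell
# Import every Drupal image into the WordPress media library.
# `wp media import` copies the file into wp-content/uploads/YYYY/MM/,
# generates the thumbnail sizes, and creates the attachment post.
for img in /var/www/drupal/files/*.jpg; do
  wp media import "$img" --title="$(basename "$img" .jpg)"
done
```

Note that this creates fresh attachment ids, so the Drupal node/file id mapping mentioned above would still have to be migrated separately.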
