How to get the database of a running container from gcloud kubernetes

Dumping Databases with Kubernetes from Google Cloud

We currently have eventyay version 1 running on Google Cloud. I did not find documentation on this from Open Event, so the first step is to understand how this system is set up. This is easier if you already have some overall knowledge of how Kubernetes or Google Cloud works, but of course it takes time if you don't.

This is the Google Cloud interface:

There are many buttons and many things that can be done, so while exploring them…

There is a console button!

When you click on this button, this appears:

You get a fully integrated terminal emulator with access to a virtual Linux environment where your project lives, which is great because it gives you full control over your project. There is no longer any need to learn how to navigate the Google Cloud GUI and configure everything there, because you already have your own shell!

So the first thing to care about is knowing which commands are available to help us with our purpose. There are two important ones: the kubectl and gcloud shell commands.
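As an aside, if kubectl is not yet pointed at the cluster, it is typically wired up with a gcloud command along these lines (the cluster name and zone below are placeholders, not the real ones from this project):

gcloud container clusters get-credentials my-cluster --zone europe-west1-b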

Beyond that, gcloud commands won't be particularly useful right now; what we need to know is which pods are currently running. (Pods are groups of one or more containers, comparable to the containers in Docker.)

Awesome, we can list the pods we have available. I assume that web-1299653859-mq59r is the pod where Open Event is running.

We can confirm this by doing kubectl get services, which shows the ports on which the services are exposed. We see that web is on port 8080, so that's probably where Open Event is.
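In short, the two discovery commands look like this (names, IDs and ports will differ per deployment):

kubectl get pods
kubectl get services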

Through kubectl exec -it web-1299653859-mq59r -- /bin/bash we can get inside the container! Now that we're in the Open Event container, we try to find out how to connect to the database. The environment variables only show the ports and the IP address, and the configuration file where the password actually lives doesn't seem to be anywhere in plain view. What to do?

Getting inside postgres container directly

After some time digging around the container, I gave up looking for the username and password and instead got the idea to go into the postgres container directly.
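Something along these lines produces a listing of the databases (assuming the postgres pod is simply named postgres, as it is here):

kubectl exec -it postgres -- su postgres -c "psql -l"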

So there you go. A list of the databases running on postgres.

We now use pg_dump to dump the database.

kubectl exec -it postgres -- su postgres -c "pg_dump opev"

We pipe the output to a file.
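For example, this sketch writes the dump to the Cloud Shell filesystem (opev is the database name found above; the -it flags are dropped so TTY handling doesn't mangle the output):

kubectl exec postgres -- su postgres -c "pg_dump opev" > opev-dump.sql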

Then we can download the file from the Cloud Shell environment, and that's how to get the dump of the database.

References:

  1. https://kubernetes.io/docs/reference/kubectl/cheatsheet/
  2. https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/

Ember.js as a web development framework

On the frontend, Open Event uses a framework called Ember.js, and it is essential to understand it if you want to contribute to the frontend; otherwise it will be extremely hard, considering how big and complex Ember can actually be. This is a collection of quick facts I've gathered about Ember.js; knowing them might also help you contribute to Open Event faster!

Ember.js is a very opinionated web framework that uses many of the new ES6/ES7 features. It provides an MVC workflow for developers to create websites in a more organized way, and it introduces many new concepts for designing user interfaces.

Ember.js is better suited to creating web apps than to creating, say, a blog, since it uses the MVC approach to building user interfaces. Another point worth mentioning is that Ember.js is not very intuitive; it has a steep learning curve, which means that editing Ember.js web apps carries an up-front investment cost for every person who wants to work on them.

This holds regardless of whether they already know JavaScript, HTML and CSS. The reason is that Ember.js has its own internal conventions, so if you read the output of the code it generates, you're obliged to learn how the entire system works. It works not very differently from a Node.js application converted for the web (as with browserify): the HTML code the user agent downloads is very small, and it pulls in two JavaScript files. One just defines functions with all your application data (your routes, templates, and compiled Handlebars files), and the other controls the first: it contains the virtual machine interpreter and boots the website up, so to speak. That is mainly one of the reasons why big websites that use Ember.js take so long to load: every time you enter the website you download the entire application as a single JavaScript file. (Of course, once it is cached this is quicker, so the first visit will usually take much longer than later ones.) This isn't surprising, since most modern web frameworks actually do the same.

I think it’s very important to know some details about how this specific framework works.

As for the user side (the developer), it's pretty much wonderful. You have an application.hbs file where your main app (the main user interface) lives, and you can use commands to generate routes; when you add routes that aren't assigned to any parent, they're added as "children" of the main app.
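For instance, a route can be generated with the Ember CLI like this (the route name settings is just an example):

ember generate route settings

This creates the route's JavaScript file and its template, and registers the route in the router.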

Before we get lost in the new jargon, let me quickly explain what a route is. When you're visiting a website, the URL contains a path from the root of the site to whatever resource you're currently looking at. Every new URL you visit may get added to your history (if you have it enabled), and you're allowed to go back and forth through that history!

Every address is also a part of the website. A website is traditionally organized like a file system: there are directories, and there are files you can download. You can have subdirectories within directories, and nest and organize things according to what you think is important.

When you load an Ember.js site at a different URL, say you're on the main page "scheme://host:port/" and then you click on Settings ("/settings"), what usually happens is that your browser is redirected to "scheme://host:port/settings" and loads a new page. With Ember.js you actually download the same page no matter which URL you visit, but Ember.js notices which route you're on and renders the right interface for it. And when you click a link within the same website, you don't actually reload it: the site manipulates the browser history so that the history records you as visiting another part of the website, which in effect you are. This is done with the history API [1].

So when you add routes, you can see them as a sort of "subdirectory" of your Ember.js web application. We also have components: a component is a part of the application you can reuse throughout your website, like a template (a controller template, not an HTML template). There are HTML templates as well; these are the Handlebars files, which Ember.js compiles into its own bytecode and which are loaded by the Ember.js virtual machine at runtime.
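Components can be scaffolded the same way (the component name here is illustrative; classic Ember requires a dash in component names):

ember generate component event-card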

There are many rules in Ember about how to do things, and projects are organized very specifically according to its guidelines. That means Ember projects won't be the quickest at getting new people involved, but it should reduce the number of errors and bad practices, provided people stick to the guidelines Ember provides.

Resources

http://emberjs.jsbin.com/?html,css,js,output 

 


[1] http://html5doctor.com/history-api/


Open Event Server installation and docker-compose

Installing Open Event Server 

I'm going to walk you through the process of installing Open Event Server and the issues you might encounter. This will probably be helpful if you're stuck with it, but most importantly it should help in building some sort of installer, as I've stated numerous times before.

Requirements for its installation

A Debian-based distro, or a Unix-like system with the APT package manager

Enough memory and storage (depending on whether you host the database on the same system or not)

Knowledge of UNIX-like systems

First, here are the commands. I tried to make them as copy-paste friendly as possible, but that may not work depending on your setup, so afterwards I'll walk through what each step does.

apt-get update
apt-get install -y locales git sudo
locale-gen en_US.UTF-8
if [ -z "$LANG" ];then export LANG=en_US.UTF-8;fi

export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true
git clone https://github.com/fossasia/open-event-server.git --depth=1
cd open-event-server
#libffi6 libffi-dev
apt-get install -y redis-server tmux  libssl-dev vim postgresql postgresql-contrib build-essential python3-dev libpq-dev libevent-dev libmagic-dev python3-pip wget ca-certificates python3-venv curl && update-ca-certificates && apt-get clean -y

service redis-server start
service  postgresql start
cp .env.example .env
#sed -i -e 's/SERVER_NAME/#SERVER_NAME/g' .env
#The above was used because the repository used to ship with an annoying SERVER_NAME that only listened on localhost, so accessing via 127.0.0.1 didn't work
#pip install virtualenv
#python 2.7.14
#virtualenv .

python3 -m venv .
#pip install pip==9.0.1
source bin/activate
#pip3 install -r requirements/tests.txt
pip3 install --no-cache-dir -r requirements.txt
#pip install eventlet
cat << EOF | su - postgres -c psql
-- Create the database user:
CREATE USER john WITH PASSWORD 'start';

-- Create the database:
CREATE DATABASE oevent OWNER=john
                       LC_COLLATE='en_US.utf8'
                       LC_CTYPE='en_US.utf8'
                       ENCODING='UTF8'
                       TEMPLATE=template0;
EOF
python3 create_db.py admin@admin.com admin
python3 manage.py db stamp head
python3 manage.py runserver 

 

Ok, let's start at the beginning. We assume here that you're on Linux and know how to open a terminal.

Description of commands

Updating the repo

apt-get update

Ok, so let's start with some assumptions: you're either on Ubuntu or Debian. apt-get is present on every Debian-based distro, so you can also do this on Linux Mint, but if you're using another distro you may have to consult your package manager's guide to find the packages that are needed.

Unfortunately open-event-server has dependencies, and to get them we've used Ubuntu's package manager. That means some packages might not be available on every distro, and there is currently no support for that.

Installing git and sudo

apt-get install -y locales git sudo

Ok, let's also assume you're on a bare-bones Ubuntu installation, as many people are, so let's take care of some very basic requirements you probably already fulfil. git is not installed by default, so we install it here; we also install sudo. The -y flag avoids being asked for confirmation.

Setting the locales

locale-gen en_US.UTF-8
if [ -z "$LANG" ];then export LANG=en_US.UTF-8;fi

Okay, so some very bare-bones Linux distributions don't have the UTF-8 locale enabled. Here we generate it in case it hasn't been, and if the $LANG variable isn't set, we set it to the en_US locale.

This really ensures it works everywhere, especially because once you start installing packages you might otherwise run into prompts or errors.
Cloning the repo

Explanation

git clone https://github.com/fossasia/open-event-server.git --depth=1

Here we clone the repo from GitHub with git clone.

However, we use the --depth=1 flag, which means we only get the latest revision. Git was designed as a version control tool, not as a source host, so when you clone a project just to install it, you're downloading the whole revision history, and some revision histories can get pretty large, slowing the download.

If you're a contributor and want to revert some commits, or you want to debug and see what causes something, then feel free to download everything; but if you're cloning just to use the software, you really don't need the revision history.

cd open-event-server

A bit of trivia for those who didn't know: cd means "change directory". Here we change our current directory to open-event-server.

#libffi6 libffi-dev
apt-get install -y redis-server tmux  libssl-dev vim postgresql postgresql-contrib build-essential python3-dev libpq-dev libevent-dev libmagic-dev python3-pip wget ca-certificates python3-venv curl && update-ca-certificates && apt-get clean -y


Okay, here we use apt-get to install redis-server, which is a dependency of Open Event, and postgresql. We also install Python 3 plus some important packages, such as python3-venv and python3-pip.

Starting services

In this specific case we're installing these servers on our own machine, but it doesn't have to be that way: if you have an external postgres or redis server, you can use that as well.
service redis-server start

We start the redis server.

service  postgresql start

We start the postgres service.

cp .env.example .env

This copies the example environment file into place; the server reads its configuration from .env.

Virtual Python environments

python3 -m venv .

We initialize the virtualenv in the current directory, that is, "."

source bin/activate

We activate the virtualenv we just generated.

pip3 install --no-cache-dir -r requirements.txt

Postgres

cat << EOF | su - postgres -c psql
-- Create the database user:
CREATE USER john WITH PASSWORD 'start';

-- Create the database:
CREATE DATABASE oevent OWNER=john
                                 LC_COLLATE='en_US.utf8'
                                 LC_CTYPE='en_US.utf8'
                                 ENCODING='UTF8'
TEMPLATE=template0;
EOF

This is a very clever use of command-line pipes. psql is the command you use to interact with the database, but it is usually interactive; however, on the command line you can easily pipe typed commands into it using cat and a heredoc. This is equivalent to executing su - postgres -c psql and then typing everything up to EOF. So next time you need to automate something that prompts you for input, you can use this trick!

Also, psql generally won't accept connections from anyone but postgres, even if you're root. That's why we use su - postgres -c psql, which means "run psql as the user postgres". You don't need to use su if you know how to change users with sudo.
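As a minimal sketch of the same trick with sudo instead of su (any interactive program can take the place of psql):

sudo -u postgres psql << EOF
-- everything up to EOF is fed to psql as if typed interactively
SELECT version();
EOF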

Open Event

# Create the database tables and the initial admin account (email, password)
python3 create_db.py admin@admin.com admin
# Mark the database as up to date with the latest migration
python3 manage.py db stamp head
# Start the development server
python3 manage.py runserver

Using docker-compose to install both versions

I'll now explain how to do it using docker-compose, which boots both the server and the frontend concurrently.

git clone https://github.com/Kreijstal/oevent-docker-compose.git
cd oevent-docker-compose
# clone.sh fetches the sources of both versions into this directory
bash clone.sh
# build the images
docker-compose build

docker-compose run v1 python create_db.py

You will be prompted to add an admin account for the v1 server. After you do that, just run

docker-compose up

(The v2 login settings can be changed in docker-compose.yml.)
And that's it! Much faster than the other way, it requires even less typing, and it almost always succeeds. The catch is that you need Docker installed on your machine.


Voice Interfaces

(This blogpost is based on a meetup that was held about voice interfaces.)

Evolution of User Interfaces

From keyboards, to GUIs, to Touch, and now Voice!

First we used punch cards to talk to machines. It was very costly and used mostly in research; it was our first experience with machines.

Punched cards

At the beginning of computing came the first large computers: huge, noisy boxes whose complexity of management meant only universities and large companies could afford to run them.

How it worked is that you wrote your code, used a machine that punched holes into cards, and the computer would parse this data, convert it to its own internal representation and start a job for it. These machines were new and many people wanted to use them, so there were long queues. It was very expensive and difficult to use; there was no user interface design at all, and since these machines were not designed for the public, that was never a consideration.

Fortunately, things evolved and access to computers became increasingly common, although they were still mainly products for work. For years, punched cards were used to enter the different orders; it was not comfortable at all.

Then Xerox arrived with its graphical interface, an important leap in interacting with a computer because, until then, you could only work through command lines and had to be an expert. Unfortunately few remember the people of Xerox PARC and their contribution to the world of information technology; most think of Apple with its Macintosh, or Microsoft with Windows 1.0, as the "inventors" of graphical interfaces. But no, it was not like that. What we cannot deny Apple and Microsoft, especially the former, is their weight in the creation of personal computers and their democratization, so that we could all have one.

But let's step back: between punch cards and the GUI came the command line.

The command line (or CLI) is an interface: a method of giving written instructions to the underlying program. This interface is usually called the system console or command console. It interacts with information in the simplest way possible, without graphics or anything other than raw text. Orders are written as lines of text (hence the name), and if the programs respond, they usually do so by printing information on the following lines.

Almost any program can be designed to offer the user some kind of CLI. For example, almost all first-person PC games have a built-in command-line interface, used for diagnostic and administrative tasks. As a primary work tool, command lines are mainly used by programmers and system administrators, especially on Unix-based operating systems; in scientific and engineering environments; and by a smaller subset of advanced home users.

To use a command line you just need the keyboard: you type commands and the computer replies to those queries. It is still widely used today because it is one of the easiest interfaces to build and one of the most powerful.

CLI implementations

Programs that use a CLI to interact with the kernel of an operating system are often called shells or shell interpreters. Some examples are the various Unix shells (sh, ksh, csh, tcsh, bash, etc.), the historical CP/M, and DOS's command.com, the latter two strongly based on DEC's RSTS and RSX CLIs. Microsoft later introduced a new command-line interface called MSH (Microsoft Shell, codenamed Monad), released as Windows PowerShell, which combines features of traditional Unix shells with its .NET object-oriented framework.

Some applications provide both a CLI and a GUI. An example is the CAD program AutoCAD. The scientific/engineering numerical computing package Matlab provides no GUI for some calculations, but the CLI can perform any of them. The three-dimensional modeling program Rhinoceros 3D (used to design the cases of most portable phones, as well as thousands of other industrial products) provides a CLI (whose language, by the way, is different from the Rhino script language). In some computing environments, such as the Smalltalk or Oberon user interfaces, most of the text that appears on the screen can be used to give commands.

Three-dimensional games and simulators for PC usually include a command-line interface, sometimes as the only means to perform certain tasks. Quake, Unreal Tournament and Battlefield are just a few examples. Generally in these environments commands start with a "/" (slash).

Example

The command “list files”, under various programs:

 

Program   Command    Type of program
CMD       Dir        Windows shell
Matlab    Dir        Matrix processing
TACL      FILEINFO   Guardian shell
Quake     /dir       PC game

Graphical User Interface

Then the GUI (Graphical User Interface) was created. Now people who had no idea about computer science could use a computer; in a GUI everything is more natural, and you don't need to understand the internals of a computer in order to use it.

After that came the touchscreen, which was popularized by Apple.

                                            Natural

╔══════════╦═══════════════════════════╦══════════════════════╗
║ Personal ║ Touch / Voice interface   ║ ■■■■■■■■■■■■■■■■■■■■ ║
║ Common   ║ Graphical (mouse/keyboard)║ ■■■■■■■■■■■■         ║
║ Pro      ║ Command line              ║ ■■■■                 ║
║ Research ║ Punch cards               ║ ■                    ║
╚══════════╩═══════════════════════════╩══════════════════════╝

                                            Mechanic

The graphical interface emerged as an evolution of the command-line interfaces used in the first operating systems, and it is fundamental to a graphical environment.

The Windows desktop environments, the X Window System on GNU/Linux, and Aqua on Mac OS X are some of the best-known examples of graphical user interfaces. The graphical user interface has become common use because it lets the user establish a more comfortable and intuitive contact with the computer. An interface is the device that allows communication between two systems that do not speak the same language; the term covers both the set of connections and devices that facilitate communication between two systems, and the visible side of programs as presented to the users who interact with the computer.

It implies the presence of a monitor or screen that, through a series of menus and icons, represents the options the user can choose within the system.

The characteristics of an efficient interface might be:

– A fixed and permanent representation of a specific context of action.

– Ease of understanding, learning and use.

– The object of interest must be easy to identify.

– Ergonomic design through menus, toolbars and easily accessible icons.

– Interactions based on manual actions on visual or auditory elements, and on menu selections with clear syntax and order.

A GUI is a user interface in which a person interacts with digital information through a graphical simulation environment. This system of interaction is called WYSIWYG (what you see is what you get), and in it the objects and icons of the graphical interface behave as metaphors for the actions and tasks the user must perform.

A voice interface is just the next step: a way to interact with the machine through voice commands. You say commands out loud, the machine attempts to interpret them, and based on this information it executes a task. For humans this is indeed more natural; it will be a challenge for machines, but that's what SUSI.AI is about. There are other contenders too: Cortana, Siri, Alexa and Google Assistant.

Alexa Buttons

On Amazon Alexa one can find many game skills, like Jeopardy or CYOA games, but Amazon wants to try out these colourful buttons.

Amazon recently released "Alexa Buttons", which you can pair with an Alexa assistant. They can change colour and be pushed, enabling "who can push the button the fastest" games and providing another input to Alexa. It is also possible to create experiences by controlling the lights of the buttons, the speed of the transition or gradient into other colours, and so on.

Storyline

Storyline is a service that provides a web user interface where you can create Amazon Alexa skills very intuitively. Similar to Node-RED, you click and drag boxes into configurations to define how the skill should respond depending on conditions; for example, if the user asks "Where is the nearest bar?", you can anticipate the question and set a directive accordingly.

There are strong limitations compared to the freedom you get by using the API directly, but there are also big advantages: with Storyline you can create mockups in a very short time, which can be very useful at hackathons or for sketches of actual skills. You can also create an Alexa skill from a Google Sheets spreadsheet.

Snips

Snips is a service that offers you the tools to create a voice assistant entirely by yourself, without it needing to be in the cloud. This means there are no privacy concerns for those who advocate privacy; however, you must train it with data yourself and assume the costs of collecting and organizing that data. This is helpful for companies with private data that want their own voice assistants without being monitored by Amazon, Google, Apple, Microsoft, or any of the giant tech companies.

Snips also targets makers: people who want to create their own personal assistant without being limited by most of the restrictions of the other voice assistants.

Sources:

  1. http://ucipedia.uci.cu/index.php/L%C3%ADnea_de_comandos1

Converting Drupal into WordPress

WordPress has the ability to update automatically, while our existing Drupal system requires time for manual maintenance. This is the main reason for our switch.

So the first thing I did was google for a plugin. I found the fg-drupal2wordpress plugin, but it only migrates the posts and doesn't import users, media or tags unless you pay about 40€, so naturally I stayed away from it. My second intuition was to look on GitHub, and what I found there was also a plugin, and open source. I used it, and it worked for almost everything except the images.

The original website; even some images are missing now because the website is no longer served from /

The gallery

Challenges

The issue with the images is that the Drupal installation I was given used a very old version of the Drupal "Images" add-on, so the plugin converts these images into "posts", but they're not actually converted into images.

As you can see, some posts are blank.

In Drupal, every blogpost and image is stored as a node.

The result of the SQL query "SELECT nid, file_usage.fid, node.title, files.filepath FROM perspektive.node INNER JOIN perspektive.file_usage ON perspektive.node.nid = perspektive.file_usage.id INNER JOIN perspektive.files ON perspektive.file_usage.fid = perspektive.files.fid". As you can see, it shows where the images are stored, which node id they have, and which file id they have according to Drupal.

To move these images to WordPress, you need a script that reads every image location, copies the file into the corresponding wp-content/uploads/YYYY/MM/ directory, makes a thumbnail of every image, and adds around five entries to the wp_postmeta table per image. If you do that, you'll effectively import all the images into the WordPress media library, and that's without counting the tags and the author of every image. It is not an easy task!

Proposals

One solution to this problem was to add an img tag to each of these posts so that every post has its own image, and this works in principle; you might even add an "image" tag to every image post. However, the images aren't actually appended to the WordPress media library, so a page meant to act as a gallery won't work that way. What you would need to do is add every image to the WordPress media library. You can do this, but there are a lot of images to convert, and it's not easy because you also have to create thumbnails. You could partly automate this process using WP-CLI and add a lot of images with it; the problem is that the images have their own post id or node id in Drupal, and copying that associated information from the Drupal database is not hard, but it takes time. A sketch of the automation follows.
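As a rough sketch of that WP-CLI automation (the source path is illustrative; wp media import copies each file into wp-content/uploads/YYYY/MM/ and generates the thumbnails, though it won't preserve the Drupal node ids or authorship):

# import every image Drupal stored on disk into the WordPress media library
for f in /var/www/drupal/files/images/*; do
  wp media import "$f"
done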

Another solution was to update the open-source plugin myself, which is what I started to do, so I began digging into its code to find out how it worked. I probably found some errors while investigating, and I also searched for forks of the project, since the original author was apparently not replying to PRs or issues. The project had been abandoned, so if I needed help, I could only help myself.

Thoughts

I always knew about CMSes and how they worked, but never got into them because I never felt fully in control or that I understood the website. Now that I have investigated this issue, I feel I have a more intrinsic understanding of Drupal and WordPress, and I might be more inclined to use them myself should I need to.

I find it surprising that it is really hard to find converters from SQL to other formats (like JSON); there were some web converters, but they were web-only and couldn't handle the data size. In the end I used phpMyAdmin, which was probably what I was looking for, and I did learn a ton about SQL.

Resources

Author: Mario Behling, Website: https://web.archive.org/web/20060615070835/http://www.perspektive89.com:80, Title: Perspektive89

Author: Dries Buytaert, Website: https://api.drupal.org/api/drupal, Title: Drupal API reference

Author: WordPress, Website: https://developer.wordpress.org/reference/, Title: WordPress Reference
