Jobs: Update Ruby on Rails, Transition Storage and Fix Open Bugs on Voicerepublic.com

STATUS: CLAIMED

We are looking for a remote freelance developer to fix issues and update the open source code base of Voicerepublic.com using Ruby on Rails, storage and other web technologies. Goals: ensure security by updating to the latest versions, fix issues impacting functionality, complete a working transition from S3 to Backblaze, and enable automatic deployment of the project from GitHub to a Debian server.

This freelance project includes a thorough functionality check after the updates. Changes and deployment should be documented following best practices.

[Apply Here]

The GitHub pages are:

Ruby is already updated to 2.4 and working on Debian Buster (see the “buster” branch in the project repository). Other technologies include Capistrano, WebSocket, AWS S3 (on Backblaze), ClojureScript (Clojure), CoffeeScript, ActiveAdmin, AngularJS and Cdist. The website should be deployed (again) to VoiceRepublic (voicerepublic.com).

The following needs to be taken care of as part of the project.

1. Updates and Dependencies

Please update to well-supported Ruby and Rails versions to ensure the system can run smoothly. Desired versions:

  • Update Ruby 2.4.N to Ruby 2.7.1
  • Update Rails 4.2.0 to Rails 6.1.x
  • Bump dependencies to the latest versions (also see the automatic Dependabot PRs)

2. Data Sources and Deployment

  • Switch to Backblaze’s S3 as the storage engine (the data is already transferred)
  • Make the changes to the upload code needed for compatibility with B2; also see https://github.com/voicerepublic/voicerepublic_dev/issues/891
  • Rename the “Integration” branch to “development”. Deploy the development and master branches automatically with Travis to run the system including the admin app (backend). For settings, use environment variables on Travis.
  • Automatically create Docker images
  • Add (semi-)automatic tests to ensure upload and streaming functionality works on Backblaze (start with manual tests)
  • Set up Vercel or another suitable service to create a test installation for each PR

3. Office Backend

  • Solve issues in the back office – batch actions, enable delete, and show the “public page link” – to re-enable administration tasks
  • Add system config settings, e.g. Backblaze S3 keys, Mailgun and other config options, to the back-office settings UI

4. Voicerepublic User System

  • Fix missing images and ensure all media files come from internal resources (not external)
  • Unlink the Streamboxx page https://voicerepublic.com/pages/streamboxx. We currently don’t provide this feature but might come back to it later.
  • Update dead/outdated links to the blog, help pages, etc.; e.g. the Twitter link should be https://twitter.com/VoiceRepublic_
  • Remove the tawk.to chat box service
  • Fix links to public pages
  • Take out Facebook Login (comment out the code in case we come back to it later)
  • Fix the RSS issue that causes high resource usage
  • Add a privacy-respecting captcha for user sign-up
  • Check the validity of https://github.com/voicerepublic/voicerepublic_dev/blob/integration/CONFERENCE.md and move any still relevant content to Readme.md, then delete the file.
  • Move the deployment info to the /docs folder, delete any outdated content and update the deployment info. The current file is at https://github.com/voicerepublic/voicerepublic_dev/blob/integration/DEPLOYMENT.md

Uploading Files via APIs in the Open Event Server

There are two file upload endpoints: one for image uploads and one for all other files. The latter endpoint is used for uploading files such as slides, videos and other presentation materials for a session. So, in FOSSASIA’s Orga Server project, when we need to upload a file, we make an API request to this endpoint, which in turn uploads the file to the server and returns the URL of the uploaded file. We then store this URL in the database with the corresponding row entry.

Sending Data

The endpoint /upload/file accepts a POST request containing a multipart/form-data payload. A single file is uploaded under the key “file”; an array of files is sent under the key “files[]”.

A typical single file upload cURL request would look like this:

curl -H "Authorization: JWT <key>" -F file=@file.pdf -X POST http://localhost:5000/v1/upload/file

A typical multi-file upload cURL request would look something like this:

curl -H "Authorization: JWT <key>" -F "files[]=@file1.pdf" -F "files[]=@file2.pdf" -X POST http://localhost:5000/v1/upload/file

Thus, unlike other endpoints in the Open Event Orga Server project, we don’t send a JSON-encoded request; instead, it is a form-data request.
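For reference, here is a sketch of the equivalent single-file request made from Python with the requests library (the JWT token, host and file name are placeholders):

import requests

# Placeholder token and URL; substitute real values.
headers = {'Authorization': 'JWT <key>'}

with open('file.pdf', 'rb') as f:
    response = requests.post(
        'http://localhost:5000/v1/upload/file',
        headers=headers,
        files={'file': f}  # sent as multipart/form-data, not JSON
    )

print(response.json())  # e.g. {"url": "https://..."}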

Saving Files

We use different services such as S3, Google Cloud Storage and so on for storing the files, depending on the admin settings of the project. One can also ask to save the files locally by passing the GET parameter force_local=true. So, in the backend we have two cases to handle: single file upload and multiple file upload.
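Note that force_local arrives as a query-string parameter, so the snippets below compare it against the string 'true'. Reading it might look like this (a sketch; the exact handling in the project may differ):

from flask import request

# The GET parameter arrives as a string, hence the string comparison below.
force_local = request.args.get('force_local', 'false')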

Single File Upload

import uuid

# uploaded_file, upload, upload_local and UPLOAD_PATHS are helpers
# defined in the project's modules.
if 'file' in request.files:
    # Single file: read it from the multipart payload and store it
    # in a temporary location on the server.
    files = request.files['file']
    file_uploaded = uploaded_file(files=files)
    if force_local == 'true':
        # Save on the application server itself.
        files_url = upload_local(
            file_uploaded,
            UPLOAD_PATHS['temp']['event'].format(uuid=uuid.uuid4())
        )
    else:
        # Save on the storage service configured in the admin settings.
        files_url = upload(
            file_uploaded,
            UPLOAD_PATHS['temp']['event'].format(uuid=uuid.uuid4())
        )


We get the file to be uploaded using request.files['file'], with the same key 'file' that was used in the payload. Then we use the uploaded_file() helper function to convert the file data received as payload into a proper file and store it in temporary storage. After this, if force_local is set to true, we use the upload_local helper function to upload it to local storage, i.e. the server where the application is hosted; otherwise we use whatever service is set by the admin in the admin settings.

In the uploaded_file() function of the helpers module, we extract the filename and the extension of the file from the form-data payload. Then we check whether a suitable directory already exists; if it doesn’t, we create it and then save the file there.

import os
from flask import current_app

# get_file_name() is a project helper that generates a unique name.
extension = files.filename.split('.')[1]  # assumes a single dot in the name
filename = get_file_name() + '.' + extension
filedir = current_app.config.get('BASE_DIR') + '/static/uploads/'
if not os.path.isdir(filedir):
    os.makedirs(filedir)
file_path = filedir + filename
files.save(file_path)


After that, the upload() function reads the settings key for either S3 or Google Cloud Storage and uses the corresponding function to upload this temporary file to the configured storage.
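A simplified sketch of that dispatch logic could look like the following; the setting name storage_place and the helpers get_settings, upload_to_s3 and upload_to_gs are hypothetical names used for illustration only:

def upload(uploaded_file, key):
    # Hypothetical sketch: read the configured backend from the admin
    # settings and delegate to the matching storage helper.
    storage = get_settings().get('storage_place')  # hypothetical setting name
    if storage == 's3':
        url = upload_to_s3(uploaded_file, key)     # hypothetical helper
    elif storage == 'gs':
        url = upload_to_gs(uploaded_file, key)     # hypothetical helper
    else:
        url = upload_local(uploaded_file, key)     # fall back to local storage
    return url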

Multiple File Upload

elif 'files[]' in request.files:
    # Multiple files: read the whole list from the multipart payload.
    files = request.files.getlist('files[]')
    files_uploaded = uploaded_file(files=files, multiple=True)
    files_url = []
    for file_uploaded in files_uploaded:
        if force_local == 'true':
            # Save each file on the application server itself.
            files_url.append(upload_local(
                file_uploaded,
                UPLOAD_PATHS['temp']['event'].format(uuid=uuid.uuid4())
            ))
        else:
            # Save each file on the configured storage service.
            files_url.append(upload(
                file_uploaded,
                UPLOAD_PATHS['temp']['event'].format(uuid=uuid.uuid4())
            ))


In the case of multiple file upload, we get a list of files instead of a single file, so we fetch the list sent as form data using request.files.getlist('files[]'). Here 'files' is the key that is used, and since it carries an array of file content it is written as files[]. We again use the uploaded_file() function to get back a list of temporary files from the content uploaded as form data. After that, we loop over all the temporary files stored in the variable files_uploaded in the above code. For every file in this list, we use the upload() helper function to save it in the application’s storage system.

In the uploaded_file() function of the helpers module, things work differently this time because multiple files and their content are sent. We loop over all the files received, and for each one we extract its filename and extension, create the directory to save it in if needed, and save the content under the corresponding filename and extension. After each file has been saved, we append it to a list, and finally return the entire list of uploaded files.

if multiple:
    files_uploaded = []
    for file in files:
        # Same naming scheme as the single-file case, applied per file.
        extension = file.filename.split('.')[1]
        filename = get_file_name() + '.' + extension
        filedir = current_app.config.get('BASE_DIR') + '/static/uploads/'
        if not os.path.isdir(filedir):
            os.makedirs(filedir)
        file_path = filedir + filename
        file.save(file_path)
        files_uploaded.append(UploadedFile(file_path, filename))


The upload() function then finally returns the URLs for the files after saving them.

API Response

The file upload endpoint returns either a single URL or a list of URLs, depending on whether one file or multiple files were uploaded. The URL of a file depends on the storage system in use. After the URL or list of URLs is generated, we jsonify the entire response so that we send proper JSON that can be parsed in the frontend and used to save the corresponding information to the database via the other API services.

A typical single file upload response looks like this:

{
    "url": "https://xyz.storage.com/asd/fgh/hjk/12332233.docx"
}

A multiple file upload response looks like this:

{
    "url": [
        "https://xyz.storage.com/asd/fgh/hjk/12332233.docx",
        "https://xyz.storage.com/asd/fgh/hjk/66777777.ppt"
    ]
}
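Since the url field is a string for a single upload but a list for multiple uploads, client code has to handle both shapes. One way to normalize this on the client side (a sketch, not part of the server code):

def uploaded_urls(response_json):
    # Always return a list of URLs, whether one file or several were uploaded.
    url = response_json['url']
    return url if isinstance(url, list) else [url]

# uploaded_urls({"url": "https://xyz.storage.com/asd/fgh/hjk/12332233.docx"})
# -> ["https://xyz.storage.com/asd/fgh/hjk/12332233.docx"]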

You can find the related documentation and example payloads showing how to use this endpoint to upload files here: http://open-event-api.herokuapp.com/#upload-file-upload.

 


Deploying Yacy with Docker on Different Cloud Platforms

To make deploying Yacy easier, we now support Docker-based installation.

Following the steps below, one can run Yacy on Docker.

  1. You can pull the image of Yacy from https://hub.docker.com/r/nikhilrayaprolu/yacygridmcp/ or build it on your own with the Dockerfile present at https://github.com/yacy/yacy_grid_mcp/blob/master/docker/Dockerfile

One can pull the Docker image using the command:

docker pull nikhilrayaprolu/yacygridmcp

 

  2. Once you have an image of yacygridmcp, you can run it by typing:

docker run -p 8100:8100 <image_name>

 

You can then access the yacygridmcp endpoint at localhost:8100 (the -p flag above publishes the container’s port 8100 to the host).

Installation of Yacy on cloud servers:

Installing Yacy and all microservices with just one command:

  • One can also download, build and run Yacy and all its microservices (presently supported are yacy_grid_crawler, yacy_grid_loader, yacy_grid_ui, yacy_grid_parser and yacy_grid_mcp)
  • To build all these microservices with one command, run the bash script productiondeployment.sh:
    • `bash productiondeployment.sh build` installs all required dependencies and builds the microservices by cloning them from their GitHub repositories.
    • `bash productiondeployment.sh run` starts all services.
    • Right now all repositories are cloned into ~/yacy; you can make customisations and your own changes to this code and build your own customised Yacy.

The related PRs of this work are https://github.com/yacy/yacy_grid_mcp/pull/21, https://github.com/yacy/yacy_grid_mcp/pull/20 and https://github.com/yacy/yacy_grid_mcp/pull/13.
