Set proper content type when uploading files to S3 with python-magic

In the open-event-orga-server project, we have been using Amazon S3 for storage for quite some time now. At some point we ran into an issue: no matter what the file type was, the Content-Type returned when retrieving these files from the storage was always application/octet-stream.

An example response when retrieving an image from S3 was as follows:


Accept-Ranges: bytes
Content-Disposition: attachment; filename=HansBakker_111.jpg
Content-Length: 56060
Content-Type: application/octet-stream
Date: Fri, 09 Sep 2016 10:51:06 GMT
ETag: "964b1d839a9261fb0b159e960ceb4cf9"
Last-Modified: Tue, 06 Sep 2016 05:06:23 GMT
Server: AmazonS3
x-amz-id-2: 1GnO0Ta1e+qUE96Qgjm5ZyfyuhMetjc7vfX8UWEsE4fkZRBAuGx9gQwozidTroDVO/SU3BusCZs=
x-amz-request-id: ACF274542E950116


As seen above, instead of image/jpeg, the Content-Type is given as application/octet-stream. While uploading the files we were not setting the content type explicitly, which turned out to be the root of the problem.

It was decided that we would provide the content type explicitly, so it was time to choose an efficient library to determine the file type based on the content of the file rather than the file extension. After researching the available libraries, python-magic seemed to be the obvious choice. python-magic is a Python interface to the libmagic file type identification library. libmagic identifies file types by checking their headers against a predefined list of file types.

Here is an example straight from python-magic's README on its usage:


>>> import magic
>>> magic.from_file("testdata/test.pdf")
'PDF document, version 1.2'
>>> magic.from_buffer(open("testdata/test.pdf").read(1024))
'PDF document, version 1.2'
>>> magic.from_file("testdata/test.pdf", mime=True)
'application/pdf'


Given below is a code snippet from the S3 upload function in the project:


# `file` is the incoming file object, `filename` its name,
# and k = Key(bucket) as defined earlier in the function
file_data = file.read()
file_mime = magic.from_buffer(file_data, mime=True)
size = len(file_data)
sent = k.set_contents_from_string(
    file_data,
    headers={
        'Content-Disposition': 'attachment; filename=%s' % filename,
        'Content-Type': '%s' % file_mime
    }
)
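For context, here is a minimal, self-contained sketch of how such an upload helper can be put together end to end with boto and python-magic. It is only an illustration; the function name upload_to_s3 and its parameters are hypothetical and not the project's actual API.


import boto
import magic
from boto.s3.key import Key

# Hypothetical helper, assuming boto 2 and python-magic are installed.
def upload_to_s3(file_obj, filename, bucket_name, aws_key_id, aws_secret):
    # read the whole file and detect its MIME type from the content
    file_data = file_obj.read()
    file_mime = magic.from_buffer(file_data, mime=True)

    # connect to S3 and fetch the target bucket
    conn = boto.connect_s3(aws_key_id, aws_secret)
    bucket = conn.get_bucket(bucket_name)

    # upload under the given key with an explicit content type
    k = Key(bucket)
    k.key = filename
    k.set_contents_from_string(
        file_data,
        headers={
            'Content-Disposition': 'attachment; filename=%s' % filename,
            'Content-Type': file_mime
        }
    )
    return k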


One thing to note: python-magic depends on libmagic, and many distros do not come with the libmagic development package pre-installed, so make sure you install it explicitly. (Installation instructions may vary per distro.)


sudo apt-get install libmagic-dev

Voila!! Now every file you retrieve will come back with the proper content type.


Using S3 for Cloud storage

In this post, I will talk about how we can use Amazon S3 (Simple Storage Service) for cloud storage. As you may know, S3 is a no-fuss, super easy cloud storage service based on the IaaS model. There is practically no limit on the size or number of files you can keep on S3, and you pay only for what you use. This makes S3 very popular among enterprises of all sizes and individuals.

Now let’s see how to use S3 in Python. Luckily, we have a very nice library called Boto. Boto is a library developed by the AWS team to provide a Python SDK for Amazon Web Services. Using it is very simple and straightforward. Here is a basic example of uploading a file to S3 using Boto –

import boto
from boto.s3.key import Key
# connect to the bucket
conn = boto.connect_s3(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
bucket = conn.get_bucket(BUCKET_NAME)
# set the key
key = 'key/for/file'
file = '/full/path/to/file'
# create a key to keep track of our file in the storage
k = Key(bucket)
k.key = key
k.set_contents_from_filename(file)

The above example uploads a file to the S3 bucket BUCKET_NAME.

Buckets are containers which store data. The key here is the unique identifier for an item in the bucket; every item in the bucket is identified by a unique key assigned to it. The file can then be downloaded from the URL BUCKET_NAME.s3.amazonaws.com/{key}. It is therefore essential to choose key names smartly so that you don’t end up overwriting an existing item on the server.
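If the object is not publicly readable, that plain URL will return Access Denied; boto can also generate a time-limited signed URL for a key. A small sketch, assuming the k key object and BUCKET_NAME from the example above:


# public URL pattern (only works if the object is publicly readable)
public_url = 'https://%s.s3.amazonaws.com/%s' % (BUCKET_NAME, k.key)

# signed URL, valid for one hour, works for private objects too
signed_url = k.generate_url(expires_in=3600)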

In the Open Event project, I thought of a scheme that allows us to avoid conflicts. It relies on using the IDs of items to distinguish them and goes as follows (see the sketch after the list) –

  • When uploading user avatar, key should be ‘users/{userId}/avatar’
  • When uploading event logo, key should be ‘events/{eventId}/logo’
  • When uploading audio of session, key should be ‘events/{eventId}/sessions/{sessionId}/audio’
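As a rough illustration, the scheme could be captured in a few small helper functions (hypothetical names, not code from the project):


def avatar_key(user_id):
    # key for a user's avatar image
    return 'users/%s/avatar' % user_id

def event_logo_key(event_id):
    # key for an event's logo
    return 'events/%s/logo' % event_id

def session_audio_key(event_id, session_id):
    # key for the audio recording of a session
    return 'events/%s/sessions/%s/audio' % (event_id, session_id)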

Note that to store a user’s avatar, I am setting the key as /avatar and not /avatar.extension. This is because if a user uploads pictures in different formats, we would end up storing different copies of the avatar for the same user. This is nice, but its limitation is that downloading the file from the URL will give a file without an extension. To solve this issue, we can use the Content-Disposition header.

k.set_contents_from_filename(
    file,
    headers={
        'Content-Disposition': 'attachment; filename=filename.extension'
    }
)

So now when someone tries to download the file from that link, they will get the file with an extension instead of an extension-less “Choose what you want to do” file.

This covers the basics of using S3 in your Python project. You may explore Boto’s S3 documentation to find other interesting functions, like deleting a folder, copying one folder to another, and so on.
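For instance, since S3 has no real folders (a “folder” is just a key prefix), deleting a folder boils down to deleting every key under that prefix. A rough sketch with boto, assuming the bucket object from earlier and a hypothetical events/42/ prefix:


# delete every object under a prefix, i.e. "delete the folder"
bucket.delete_keys([k.key for k in bucket.list(prefix='events/42/')])

# copy a single object to a new key within the same bucket
bucket.copy_key('events/42/logo-copy', bucket.name, 'events/42/logo')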

Also don’t forget to have a look at the awesome documentation we wrote for the Open Event project. It provides a more pictorial and detailed guide on how to setup S3 for your project.


{{ Repost from my personal blog http://aviaryan.in/blog/gsoc/s3-for-storage.html }}