I have a route in my config for a page, say /secure, which requires login (handled via Authlogic). A before_filter in my controller takes care of that, and it works fine: within the application, the page and its resources have restricted access.
Trouble is, we are using Amazon S3 for storage in this app (based on Refinery CMS), deployed to Heroku. I have a bucket and it works fine.
However, any resource uploaded to the secure part of the application is directly accessible through the browser. In other words, the /secure page contains items like PDF files. While the resources are secured within the app, those PDF files are accessible from anywhere on the Internet (example URL): http://s3.amazonaws.com/my_bucket/images/1234/the_file_which_should_be_secure.pdf
Can I do fine-grained access control on S3? Do I have to create a new bucket? Ideally I'd like to set a flag on a resource that makes it invisible to the rest of the Internet.
Any suggestions are welcome.
The simplest and easiest solution is to name your S3 assets with random, unguessable filenames, and then expose those secret URLs only to the people who should have access.
This is how Facebook photos and many other sites work: there is no privacy or security beyond the obscurity of the individual filenames.
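A minimal sketch of that approach in Ruby: prefix each uploaded file's S3 key with a long random token from `SecureRandom`, so the URL cannot be guessed or enumerated. The helper name and the `uploads/` prefix are hypothetical, not part of Refinery CMS or any S3 library.

```ruby
require "securerandom"

# Build an unguessable S3 key for an uploaded file by inserting a
# 256-bit random token (64 hex characters) into the path.
# Anyone without the full URL cannot guess or enumerate it.
def unguessable_key(filename)
  "uploads/#{SecureRandom.hex(32)}/#{filename}"
end

key = unguessable_key("the_file_which_should_be_secure.pdf")
# e.g. "uploads/3f8a...c91d/the_file_which_should_be_secure.pdf"
```

Store the generated key on the resource record and render it only on pages behind your before_filter. Keep in mind this is obscurity, not authentication: anyone who obtains the URL (e.g. from browser history or a shared link) can fetch the file.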