How to divide an S3 bucket into per-customer paths and achieve secure file access

One S3 bucket to rule them all

Bahadir Balban

Buzz Founder

@saasboxengineering

What if each user could access your S3 bucket with their own keys?

Summary:

You can divide a single S3 bucket into per-customer paths and give each customer read and write access only to their own /username path. You do this by creating an AWS IAM user for each customer and attaching a policy that restricts that user to their /username path.

Customers can do accelerated uploads using signed S3 URLs, and make their files temporarily and securely available to the public (e.g. behind a paywall).

Use case: your customer’s customer signs up to their service and downloads a file via your customer’s signed S3 URL. The file lives in your S3 bucket.

If you want to go one step further and let users publish their files for download via a CDN, CloudFront does not support this out of the box. Each user has their own keys to their /username path, but CloudFront signs with a single master key; you can’t generate a per-user CloudFront key for a single S3 bucket the way you generate IAM keys. There is a workaround for this, shared below as well, or simpler: just use signed S3 URLs for downloads.

Details

While building SaaSBox, I needed a storage hosting solution where each customer has read and write access to their own files. I wanted something simple that works well for many users, and I ended up with a single S3 bucket divided into customer paths, each starting with /username.

Here is how it works:

Set up a single S3 bucket. Each time a new user/customer signs up, create a new IAM user in AWS and attach the following policy to that user:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowGroupToSeeBucketListInTheConsole",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::*"
            ]
        },
        {
            "Sid": "AllowRootAndHomeListingOfCompanyBucket",
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::saasbox-files"
            ],
            "Condition": {
                "StringEquals": {
                    "s3:prefix": [
                        "",
                        "/"
                    ],
                    "s3:delimiter": [
                        "/"
                    ]
                }
            }
        },
        {
            "Sid": "AllowListingOfUserFolder",
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::saasbox-files"
            ],
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "${aws:username}/*",
                        "${aws:username}"
                    ]
                }
            }
        },
        {
            "Sid": "AllowAllS3ActionsInUserFolder",
            "Action": [
                "s3:*"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::saasbox-files/${aws:username}/*"
            ]
        }
    ]
}
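The signup flow can be sketched with the AWS SDK for JavaScript (v2, matching the rest of this article). This is only a sketch: the bucket name, policy name, and tag are illustrative, and the policy document is the one shown above, built in code with string concatenation so that ${aws:username} stays a literal for IAM to resolve.

```javascript
// Build the per-user policy document for a given bucket.
// ${aws:username} is left literal (single-quoted strings), for IAM to resolve.
function userPolicyDocument(bucket) {
  return JSON.stringify({
    Version: '2012-10-17',
    Statement: [
      { Sid: 'AllowGroupToSeeBucketListInTheConsole',
        Action: ['s3:ListAllMyBuckets', 's3:GetBucketLocation'],
        Effect: 'Allow', Resource: ['arn:aws:s3:::*'] },
      { Sid: 'AllowRootAndHomeListingOfCompanyBucket',
        Action: ['s3:ListBucket'], Effect: 'Allow',
        Resource: ['arn:aws:s3:::' + bucket],
        Condition: { StringEquals: { 's3:prefix': ['', '/'], 's3:delimiter': ['/'] } } },
      { Sid: 'AllowListingOfUserFolder',
        Action: ['s3:ListBucket'], Effect: 'Allow',
        Resource: ['arn:aws:s3:::' + bucket],
        Condition: { StringLike: { 's3:prefix': ['${aws:username}/*', '${aws:username}'] } } },
      { Sid: 'AllowAllS3ActionsInUserFolder',
        Action: ['s3:*'], Effect: 'Allow',
        Resource: ['arn:aws:s3:::' + bucket + '/${aws:username}/*'] }
    ]
  });
}

// Sketch of the signup hook: create the IAM user, tag it, attach the
// policy inline, and mint access keys to hand to the customer.
async function onCustomerSignup(username) {
  const AWS = require('aws-sdk');
  const iam = new AWS.IAM();
  await iam.createUser({
    UserName: username,
    Tags: [{ Key: 'service', Value: 'saasbox' }] // so you can tell these users apart
  }).promise();
  await iam.putUserPolicy({
    UserName: username,
    PolicyName: 'per-user-s3-path',   // illustrative name
    PolicyDocument: userPolicyDocument('saasbox-files')
  }).promise();
  const res = await iam.createAccessKey({ UserName: username }).promise();
  return res.AccessKey; // the customer's own keys
}
```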

The policy uses the ${aws:username} policy variable, which resolves to the name of the IAM user making the request, so the same policy document works for every user it is attached to.

Make sure to also tag IAM users when creating them so that you know these are users of your service.

NOTE: You must attach the policy to the IAM user, not the S3 Bucket.

Making S3 content private and available only via signed URLs

What you want to achieve is that your S3 bucket contents are always private, except:

  • When your users want to, they should be able to write to their directory.

  • They should be able to make their files public for download whenever needed (in my case right after they sell them).

You achieve this using signed URLs. S3 supports signed URLs for both upload and download. Here is the code you need in order to generate them:

S3 signed URL for reading:

/* S3 signed url for reading */
// Assumes: const AWS = require('aws-sdk'); const s3 = new AWS.S3();
// s3bucket.url holds the bucket name.
exports.get_file_read_presigned_url = function(fpath, ftype) {
	const url = s3.getSignedUrl('getObject', {
		Bucket: s3bucket.url,
		Key: fpath,                  // e.g. "<username>/file.pdf"
		ResponseContentType: ftype,
		Expires: 900                 // link lifetime in seconds (900 is the default)
	});
	return url;
}

S3 signed URL for uploading:
/* S3 signed url for uploading files */
exports.get_file_upload_presigned_url = function(fpath, ftype) {
	const url = s3.getSignedUrl('putObject', {
		Bucket: s3bucket.url,
		Key: fpath,
		ACL: 'authenticated-read', // uploaded object stays private
		ContentType: ftype         // client must send the same Content-Type header
	});
	return url;
}
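On the client side, the browser uploads straight to S3 by PUTing the file to the signed URL. A minimal sketch, assuming a browser (or Node 18+) environment with fetch; note the Content-Type header must match the ContentType the URL was signed with, or S3 rejects the request:

```javascript
// Sketch: upload a file/blob to a presigned PUT URL.
// The Content-Type header must match the ContentType used at signing time.
async function uploadToSignedUrl(signedUrl, file, ftype) {
  const res = await fetch(signedUrl, {
    method: 'PUT',
    headers: { 'Content-Type': ftype },
    body: file
  });
  if (!res.ok) throw new Error('upload failed: ' + res.status);
  return res;
}
```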

Using the CloudFront CDN for caching files

Instead of an S3 signed URL for reading, ideally you should set up CloudFront in front of the S3 bucket and sign URLs with CloudFront. Here is how you do that:

// cf_accessKeyId is the CloudFront key-pair ID; cf_privateKey is the matching private key
const signer = new AWS.CloudFront.Signer(s3bucket.cf_accessKeyId, s3bucket.cf_privateKey);
const twoDays = 2*24*60*60*1000; // in milliseconds

/* Cloudfront signed url for reading */
exports.get_file_read_presigned_url = function(fpath, ftype) {
	const signedUrl = signer.getSignedUrl({
		url: s3bucket.cdn_url + "/" + fpath, // CloudFront distribution domain + object key
		expires: Math.floor((Date.now() + twoDays)/1000) // Unix UTC timestamp for now + 2 days
	});
	return signedUrl;
}

I learned that you can enable S3 Transfer Acceleration and keep using S3 signed URLs for uploads, while using CloudFront to make the files available for reading.
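A sketch of what that acceleration setup might look like with the v2 SDK. The bucket name is an example, and both calls assume AWS credentials are already configured:

```javascript
// Sketch: one-time setup — enable Transfer Acceleration on the bucket.
async function enableAcceleration(bucket) {
  const AWS = require('aws-sdk');
  const s3 = new AWS.S3();
  await s3.putBucketAccelerateConfiguration({
    Bucket: bucket,
    AccelerateConfiguration: { Status: 'Enabled' }
  }).promise();
}

// Sketch: sign upload URLs with a client that targets the accelerate
// endpoint (<bucket>.s3-accelerate.amazonaws.com) instead of the
// regional endpoint.
function makeAcceleratedClient() {
  const AWS = require('aws-sdk');
  return new AWS.S3({ useAccelerateEndpoint: true });
}
```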

Fine-grained access to per-user S3 paths with CloudFront enabled

This is something I wanted to achieve: since I can create IAM users, each with their own keys and access to their own directory of the S3 bucket, I also wanted to serve their files through a CDN such as CloudFront, with *them* signing the URLs using their own keys.

Unfortunately this is not directly supported by CloudFront. The intended use case is that you create one master key for CloudFront using your AWS root account and sign all URLs with it. Letting each user publish their own directory of the S3 bucket with their own keys is not possible with CloudFront, because there is only that one master key.

The simple solution is to just use S3 signed URLs without CloudFront. You can serve thousands of users from a single S3 bucket!

There is a workaround for using CloudFront, though, and it is described at this link: How to use S3 signed urls with Cloudfront.

Here is a summary of the situation and the workaround. By nature, an S3 signed URL changes each time it is generated. Each new URL means CloudFront must re-fetch and re-cache the object, defeating the purpose of having a cache. The workaround is to force the S3 signed URL generation to produce the same URL for a period of time, by artificially pinning the time component to a window: e.g. for the current hour, always generate the same URL expiring one window later. CloudFront can then cache that URL for the whole window.
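The windowing trick can be sketched as a small helper: instead of signing with "now + lifetime", quantize the expiry to the start of the current window, so every request within that window yields an identical expiry (and thus an identical URL). The one-hour window is an illustrative choice, and note that with SigV4 the signing timestamp also enters the signature, so it too must be pinned to the window start for the full URL to be stable.

```javascript
// Quantize the expiry timestamp: all calls within the same window
// return the same Unix timestamp (in seconds).
function stableExpiry(windowSecs, nowMs) {
  const windowMs = windowSecs * 1000;
  // Start of the window that nowMs falls into
  const windowStart = Math.floor(nowMs / windowMs) * windowMs;
  // Valid for the rest of the current window plus one full window,
  // so a cached URL never expires mid-window.
  return windowStart / 1000 + 2 * windowSecs;
}
```

For example, with a one-hour window, every call during the same hour returns the same expiry: stableExpiry(3600, Date.now()).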

If you generate the URL via CloudFront directly, you don’t have this problem, since CloudFront has direct access to the file. But yes, you can do it, by pinning the S3-generated URL and letting the file be re-cached every few hours.



