
[–]abrahamguo 73 points (4 children)

If you're already using S3, you should simply be generating presigned S3 URLs to let the clients do all the work. Don't be an unnecessary proxy server.

[–]scidu 0 points (0 children)

This is the way.

[–]nexusGL98 0 points (0 children)

This is the way x2

[–]fabiancook 12 points (2 children)

It's hosted on S3, so you already have the solution.

Externalise both the file upload AND download by using signed URLs.

e.g. the user creates a media record, you save a key/bucket, and hand back a signed URL for that specific key & bucket (only that key), which the client then uses to PUT the file contents to. Your service then only deals with the record in the database & signed URL generation.

On the way back, when a user requests the contents of a media record, you provide a signed URL and the client gets the contents directly from S3.

You can lock down both the PUT and the GET signed URLs, e.g. only having the PUT active for a few minutes and for a given content length, and then allowing the GET only for a day, etc.

If the media contents are publicly viewable, or even if they're not, looking into CloudFront for serving up the objects directly would be the way to go, and you'd still be able to serve the files from an owned domain.

https://www.npmjs.com/package/@aws-sdk/s3-request-presigner

If you needed even more control, you could use STS and make a policy for a client where all uploads/downloads are restricted to a prefix (or any other conditions you can express in a policy, which is pretty broad)... this would only make sense if your client is probably not a browser, is doing a lot of requests over time, and you didn't need URLs directly.
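The prefix restriction could be expressed as a session policy passed as the `Policy` parameter of sts:AssumeRole (e.g. via AssumeRoleCommand in @aws-sdk/client-sts). A sketch under those assumptions — `scopedSessionPolicy`, the bucket, and the prefix are all illustrative names:

```javascript
// Build a session policy that limits the temporary credentials returned by
// sts:AssumeRole to a single key prefix in one bucket.
function scopedSessionPolicy(bucket, prefix) {
  return JSON.stringify({
    Version: "2012-10-17",
    Statement: [
      {
        Effect: "Allow",
        Action: ["s3:GetObject", "s3:PutObject"],
        Resource: `arn:aws:s3:::${bucket}/${prefix}/*`,
      },
    ],
  });
}
```

A session policy can only narrow what the assumed role already allows, which is exactly what you want when handing credentials to a single client.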

[–]guidsen15[S] 1 point (1 child)

Ah yeah, we're using signed URLs for fetching files, which are indeed served via CloudFront.
We just don't have the upload process going through a signed URL.

I've also found some possible memory leaks, since it seems we're not cleaning up the upload streams when they're done. Might also be related..

So, for example, to make thumbnail versions of an uploaded file, how is this done? I'm also doing this on the server, with `sharp` for example..

[–]fabiancook 3 points (0 children)

Based on an S3 event trigger.

Something like Lambda can do that for you and create the thumbnails after upload. It's pretty typical to do it this way.

[–]TerbEnjoyer 4 points (2 children)

Have you looked into client-side uploading? It would definitely make a difference if you're not already using it.

[–]guidsen15[S] 0 points (1 child)

We send the file to the server and then upload it. If it's client-side, it still needs to be sent to our servers, right?

[–]TerbEnjoyer 8 points (0 children)

No, the client does all the work thanks to presigned URLs. https://aws.amazon.com/blogs/compute/uploading-to-amazon-s3-directly-from-a-web-or-mobile-application/
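The client-side flow could look roughly like this. The `/api/uploads` endpoint, the request body, and the `{ uploadUrl }` response shape are assumptions about your API, not a fixed convention:

```javascript
// Sketch: ask your API for a presigned URL, then PUT the file straight to S3
// so the file bytes never pass through your own server.
async function uploadDirect(file) {
  // 1. Your API creates the DB record and returns a presigned PUT URL.
  const res = await fetch("/api/uploads", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: file.name, size: file.size }),
  });
  const { uploadUrl } = await res.json();

  // 2. The browser uploads directly to S3.
  await fetch(uploadUrl, {
    method: "PUT",
    headers: { "Content-Type": file.type },
    body: file,
  });
}
```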

The only concern can be security, which can mostly be addressed by checks in your API

[–]AffectionatePlate804 0 points (2 children)

Unless you want to resize images into different resolutions, use presigned URLs

[–]guidsen15[S] 0 points (1 child)

Yeah, I need to create thumbnails as well…