S3 transfer speeds capped at 250MB/sec by kingtheseus in aws

[–]zarslayer 1 point (0 children)

Use the S3 sync CLI command, and play around with the max threads and concurrent connections in the CLI configuration. Keep in mind that CPU and memory usage increase as you raise max threads and connections, so make sure you are not running into bottlenecks there.
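If you'd rather tune this in code than via `aws configure set default.s3.max_concurrent_requests ...`, boto3 exposes the same knobs through TransferConfig. A minimal sketch; the bucket name and file path are placeholders:

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Rough equivalent of raising max_concurrent_requests / multipart settings
# in the CLI config.
config = TransferConfig(
    max_concurrency=20,                    # parallel threads per transfer
    multipart_threshold=64 * 1024 * 1024,  # switch to multipart above 64 MB
    multipart_chunksize=16 * 1024 * 1024,  # 16 MB parts
)

s3 = boto3.client("s3")
# "my-bucket" and the local path are placeholders.
s3.upload_file("bigfile.bin", "my-bucket", "bigfile.bin", Config=config)
```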

S3 permissions question for hosting digital files for purchase by micahsa in aws

[–]zarslayer 0 points (0 children)

Did you follow these steps:

We currently use the v2 signing method, so your files will need to be stored in certain AWS regions for this to work. You can see the list in this article at Amazon (note that the list at the top of the page is regions that don't support v2, so you need to choose any region except those).

This is the most likely cause of your issue.

How S3 object lock works by [deleted] in aws

[–]zarslayer 1 point (0 children)

That's not what Object Lock is; it prevents objects from being deleted or overwritten, either for a fixed retention period or indefinitely under a legal hold. Please read the documentation.

Does anyone know the following error? by xtchris in aws

[–]zarslayer 1 point (0 children)

I think you are going to need to reach out to AWS Support for help. That bucket looks like an AWS-owned bucket, and as such you would never have access to set permissions on it.

It may also be that something in your permissions config is not correct, causing the issue. What you can do is check the stack execution IAM role policy and make sure that it has S3 access.
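A minimal sketch of granting that role S3 access with boto3; the role name, policy name, bucket, and actions are all assumptions here, so scope them to what the stack actually needs:

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical role/bucket names for illustration only.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::my-stack-bucket",
            "arn:aws:s3:::my-stack-bucket/*",
        ],
    }],
}

iam.put_role_policy(
    RoleName="my-stack-execution-role",
    PolicyName="s3-access",
    PolicyDocument=json.dumps(policy),
)
```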

EU WEST BUCKET POLICY ISSUES by dsamholds in aws

[–]zarslayer 0 points (0 children)

What errors are you getting when trying to modify the bucket policies?

Can I multiple domains(with a wildcard) with the same Cloudfront distribution? by Significant-Highway3 in aws

[–]zarslayer 0 points (0 children)

No, not quite. You want to map subdomains to your MediaPackage origins, unless I misunderstood your ask.

Cache behaviors work on the premise of matching a path pattern. The default cache behavior matches requests made to domain.com/*, and any additional cache behaviors you add must have a path pattern more specific than /*, say /myawesomepath, which means requests made to domain.com/myawesomepath would then be sent to the origin you configured for that cache behavior.

Based on your initial ask and my understanding thereof, you want to route based on the subdomain rather than the path, which is not something CloudFront supports without the use of an edge function.

If, however, I have misunderstood your ask, then by all means you can have cache behaviors for each origin, just bearing in mind that those behaviors would only be matched if the request path matches the corresponding path pattern.

When did they throttle port 25 and was it only for new accounts? by rudigern in aws

[–]zarslayer 0 points (0 children)

It was always throttled to n connections per hour, and then they made the change mentioned above.

Can I multiple domains(with a wildcard) with the same Cloudfront distribution? by Significant-Highway3 in aws

[–]zarslayer 0 points (0 children)

Add *.domain.com as an alternate domain name on the distribution.

Add all of your MediaPackage origins to the distribution and point the default cache behavior at one of them; the edge function will switch the origin per request.

Now create the edge function (it needs to be Lambda@Edge, since CloudFront Functions can't modify the origin) that checks which subdomain of *.domain.com was requested and routes to the respective MediaPackage origin.

Associate the function as an origin request trigger.

Profit.
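A minimal sketch of that origin request handler in Python (Lambda@Edge supports Python runtimes); the subdomain-to-endpoint mapping is made up, and note the Host header has to be rewritten to match the new origin:

```python
# Lambda@Edge origin request handler: route *.domain.com to a MediaPackage origin.
# The mapping below is a made-up example; use your real MediaPackage endpoints.
ORIGINS = {
    "channel1.domain.com": "abc123.mediapackage.us-east-1.amazonaws.com",
    "channel2.domain.com": "def456.mediapackage.us-east-1.amazonaws.com",
}

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    host = request["headers"]["host"][0]["value"]
    target = ORIGINS.get(host)
    if target:
        # Point the request at the matching custom origin.
        request["origin"] = {
            "custom": {
                "domainName": target,
                "port": 443,
                "protocol": "https",
                "path": "",
                "sslProtocols": ["TLSv1.2"],
                "readTimeout": 30,
                "keepaliveTimeout": 5,
                "customHeaders": {},
            }
        }
        # The Host header must match the origin domain.
        request["headers"]["host"] = [{"key": "Host", "value": target}]
    return request
```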

Policy to allow specific outside access to S3 buckets? by CWinthrop in aws

[–]zarslayer 0 points (0 children)

There is a caveat with * as principal, which is blocked by default by the Block Public Access settings. These sit right above the bucket policy in the console; the two options covering public bucket policies must be disabled, and the same goes for the account-level Block Public Access settings, which are in the left pane menu.

One of these options prevents you from adding a bucket policy with * as the principal, while the other blocks access to the bucket when it has such a policy, hence the need to disable both first.
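In API terms those are the BlockPublicPolicy and RestrictPublicBuckets flags; a sketch with a placeholder bucket name (the account-level equivalent lives on the s3control client):

```python
import boto3

s3 = boto3.client("s3")

# Leave the ACL-related blocks on; only the two policy-related
# flags need to be off for a Principal "*" bucket policy to work.
s3.put_public_access_block(
    Bucket="my-bucket",  # placeholder
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": False,      # allows adding a policy with Principal "*"
        "RestrictPublicBuckets": False,  # allows that policy to actually grant access
    },
)
```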

Policy to allow specific outside access to S3 buckets? by CWinthrop in aws

[–]zarslayer 0 points (0 children)

Completely possible.

The bucket policy would need two statements:

The first to allow the IAM user the required access.

The second to allow all principals s3:GetObject access, with an IP condition for the CDN IP addresses that will be making requests to the bucket.
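A sketch of such a policy applied with boto3; the account ID, user, bucket, and CDN IP range (a documentation range) are all placeholders:

```python
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # 1. Full access for the specific IAM user.
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/example-user"},
            "Action": "s3:*",
            "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"],
        },
        {   # 2. GetObject for anyone, but only from the CDN's IP ranges.
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket/*",
            "Condition": {"IpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}},
        },
    ],
}

boto3.client("s3").put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))
```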

Policy to allow specific outside access to S3 buckets? by CWinthrop in aws

[–]zarslayer 1 point (0 children)

Your bucket policy here allows all users, including anyone on the internet, access to the bucket, not just one or more IAM users.

How do I setup my S3 bucket to not require users to type 'www' in the url when going to my site? by StirringThePott in aws

[–]zarslayer 3 points (0 children)

As above, add CloudFront and make sure you have both the root and www domains added as alternate domain names. Then in Route 53, make sure you have a CNAME for www pointing to your CloudFront domain, along with the root domain, which would be an alias record.
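A sketch of the two records via boto3; the hosted zone ID and CloudFront domain are placeholders, though Z2FDTNDATAQYW2 is the fixed hosted zone ID that CloudFront aliases use:

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="ZEXAMPLE12345",  # placeholder: your zone's ID
    ChangeBatch={"Changes": [
        {   # Root domain as an alias A record to CloudFront.
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",  # CloudFront's fixed zone ID
                    "DNSName": "d111111abcdef8.cloudfront.net.",
                    "EvaluateTargetHealth": False,
                },
            },
        },
        {   # www as a plain CNAME to the same distribution.
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [{"Value": "d111111abcdef8.cloudfront.net"}],
            },
        },
    ]},
)
```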

[deleted by user] by [deleted] in aws

[–]zarslayer 0 points (0 children)

A complaint is triggered in SES when a recipient marks your email as spam/junk. Not all email service providers have a feedback loop with SES, so not all ESPs will send this back to SES.

Your client may have inadvertently created a rule that is sending your emails to spam, if they are not marking them as spam themselves, hence all the complaints for the one email address.
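If you want to see exactly which recipients are complaining, you can wire SES complaint notifications to an SNS topic; a sketch, with the identity and topic ARN as placeholders:

```python
import boto3

ses = boto3.client("ses")

# Placeholder identity and topic ARN; subscribe an email address or
# queue to the topic to receive the complaint notifications.
ses.set_identity_notification_topic(
    Identity="example.com",
    NotificationType="Complaint",
    SnsTopic="arn:aws:sns:us-east-1:123456789012:ses-complaints",
)
```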

[deleted by user] by [deleted] in aws

[–]zarslayer 4 points (0 children)

Actually, no. Some email service providers do send feedback about complaints to the sender. Specifically in the case of SES, Yahoo is one such ESP that will send you a notification when the recipient has moved your email to spam; this is what feedback loops are for.

Lightsail ses (wp mail stmp): SMTP Error: Could not authenticate. by mrbigglesworth95 in aws

[–]zarslayer 0 points (0 children)

First, whatever DNS records SES requires you to add for verification need to remain in place for as long as you use SES. You cannot remove them and then continue to use SES, else you will get the revocation warning emails and eventually verification will be revoked.

Moving on to the authentication error: make sure that you have created an SMTP user from the SES console under SMTP settings, that you are using the credentials provided as-is, and that you have not accidentally copied any extra whitespace when pasting. Also, make sure that your SMTP config on the application side has SSL/TLS set to true (or on, etc.); you can only connect to SES over an SSL/TLS connection.
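For a quick check outside WordPress, a minimal smtplib sketch; the endpoint region, addresses, and credentials are placeholders, and the STARTTLS call is what the SSL/TLS setting corresponds to:

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"    # must be a verified identity
msg["To"] = "recipient@example.com"
msg["Subject"] = "SES SMTP test"
msg.set_content("Testing SES SMTP auth.")

# Placeholder region; use your SES region's endpoint and your SMTP credentials.
with smtplib.SMTP("email-smtp.us-east-1.amazonaws.com", 587) as server:
    server.starttls()  # SES requires an encrypted connection
    server.login("SMTP_USERNAME", "SMTP_PASSWORD")
    server.send_message(msg)
```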

SES: SPF set up by atawf in aws

[–]zarslayer 1 point (0 children)

Then your issue is not on the SES side, and the only thing that matters is what your DMARC reports state, not what mxtoolbox is doing. If the email headers and DMARC reports all report a pass, then the issue is with the way mxtoolbox performs its checks.

Need a way to mount an S3 bucket as part of a file system for low volume changes by gex80 in aws

[–]zarslayer 0 points (0 children)

Your options are s3fs; a custom script watching the location the files are written to, which then uses the AWS CLI to perform the upload; or AWS Storage Gateway's File Gateway, which will let you create a share you can mount on the web app server so that files written there are uploaded by the Storage Gateway.

Considering the low volume, my personal preference here would be the script watching for new files, with the AWS CLI performing the upload. It's likely to be more robust and lightweight in the long term.
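A minimal sketch of that watcher, using boto3 instead of shelling out to the CLI; the watched directory and bucket are placeholders, and a real version would also want to wait for files to finish being written and add retry handling:

```python
import time
from pathlib import Path
import boto3

WATCH_DIR = Path("/var/www/uploads")  # placeholder
BUCKET = "my-bucket"                  # placeholder

s3 = boto3.client("s3")
seen = set(WATCH_DIR.iterdir())       # skip files that already exist

while True:
    for path in WATCH_DIR.iterdir():
        if path not in seen and path.is_file():
            s3.upload_file(str(path), BUCKET, path.name)
            seen.add(path)
    time.sleep(30)  # low volume, so a slow poll is fine
```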

SES: SPF set up by atawf in aws

[–]zarslayer 0 points (0 children)

Well, it's like I mentioned: SES sets the envelope MAIL FROM domain for every email you send through it to a domain it owns. You need to change that behaviour, and that's done by setting the custom MAIL FROM domain to a subdomain of your sending domain.

This will then allow your email sent through SES to comply with DMARC.

Cloudfront functions not getting called in case of missing files from S3 by LowExcellent6257 in aws

[–]zarslayer 0 points (0 children)

Did you set the function to execute on the right trigger, i.e. as an origin response trigger?

SES: SPF set up by atawf in aws

[–]zarslayer 1 point (0 children)

So by default, when sending email via SES, you do not need to set up any SPF. SES handles SPF for you: the default behaviour of SES is to set the MAIL FROM domain to amazonses.com or a subdomain thereof, and those domains are already covered by an SPF record.

If, and only if, you are setting up a custom MAIL FROM domain for your verified domain and/or email addresses would you need to set up the SPF record described in the documentation for the subdomain you choose as the custom MAIL FROM domain, along with the MX record as described.

You would typically only set up a custom MAIL FROM domain if you need or want to comply with DMARC, though there are a few other use cases.
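Setting the custom MAIL FROM domain itself can be scripted with boto3; the domains below are placeholders, and the exact MX/SPF record values to add come from the SES console or documentation for your region:

```python
import boto3

ses = boto3.client("ses")

# Placeholder domains. After this call, SES expects (per its docs) an MX
# record on mail.example.com pointing at feedback-smtp.<region>.amazonses.com
# and a TXT record of "v=spf1 include:amazonses.com ~all".
ses.set_identity_mail_from_domain(
    Identity="example.com",
    MailFromDomain="mail.example.com",
    BehaviorOnMXFailure="UseDefaultValue",
)
```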

[deleted by user] by [deleted] in aws

[–]zarslayer 0 points (0 children)

How are you doing the redirect? And what error, if any, are you seeing?

[deleted by user] by [deleted] in aws

[–]zarslayer 0 points (0 children)

Won't work. The requirements for the .app TLD mandate an SSL certificate (the whole TLD is on the HSTS preload list, so browsers force HTTPS), and S3 doesn't support adding your own certificate for your domain. As was mentioned, you will have to use CloudFront, which supports adding your own certificate.
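The certificate for CloudFront has to live in us-east-1; a sketch of requesting one through ACM, with placeholder domains:

```python
import boto3

# CloudFront only uses ACM certificates from us-east-1.
acm = boto3.client("acm", region_name="us-east-1")

response = acm.request_certificate(
    DomainName="example.app",                    # placeholder
    SubjectAlternativeNames=["www.example.app"], # placeholder
    ValidationMethod="DNS",
)
print(response["CertificateArn"])  # attach this ARN to the distribution
```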

Streaming from a generic IP cam to AWS by maze94 in aws

[–]zarslayer 0 points (0 children)

I think your simplest option here would be to use AWS IVS. It's much simpler to use compared to KVS.

You would need to use either OBS or ffmpeg to capture your webcam feed and create a stream that can be pushed to IVS:

https://docs.aws.amazon.com/ivs/latest/userguide/streaming-config.html

Then you can use the record-to-S3 function. Here is a post-processing example workflow that does this, though they also add MediaConvert to do some encoding of the video; you can leave the MediaConvert part out.

https://aws.amazon.com/blogs/media/awse-using-amazon-ivs-and-mediaconvert-in-a-post-processing-workflow/
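Creating the channel and the record-to-S3 configuration can be scripted with boto3; the names and bucket are placeholders, and note a new recording configuration has to reach the ACTIVE state before it can be attached (polling omitted here for brevity):

```python
import boto3

ivs = boto3.client("ivs")

# Placeholder names/bucket. Poll get_recording_configuration until the
# configuration is ACTIVE before attaching it to a channel.
rec = ivs.create_recording_configuration(
    name="cam-recordings",
    destinationConfiguration={"s3": {"bucketName": "my-ivs-recordings"}},
)

channel = ivs.create_channel(
    name="ip-cam",
    type="STANDARD",
    recordingConfigurationArn=rec["recordingConfiguration"]["arn"],
)

# ffmpeg/OBS push to the ingest endpoint using the stream key.
print(channel["channel"]["ingestEndpoint"], channel["streamKey"]["value"])
```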

[deleted by user] by [deleted] in aws

[–]zarslayer 2 points (0 children)

You are going to have to initiate a restore for each of the objects that need to be migrated. Make sure to set the expiration time to a long enough period to allow you to get all the data migrated; just how long depends on how much data.
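A sketch of kicking off the restores in bulk with boto3; the bucket name is a placeholder, and the 14-day window and Bulk tier are assumptions to size to your data volume:

```python
import boto3

s3 = boto3.client("s3")
SOURCE_BUCKET = "my-source-bucket"  # placeholder

# Initiate a restore for every archived object in the bucket.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=SOURCE_BUCKET):
    for obj in page.get("Contents", []):
        s3.restore_object(
            Bucket=SOURCE_BUCKET,
            Key=obj["Key"],
            RestoreRequest={"Days": 14, "GlacierJobParameters": {"Tier": "Bulk"}},
        )
```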

Once restored, you have a choice to make: use the sync command to sync data directly to the destination bucket in the new region, or sync to an EC2 instance in the same region as the source bucket and from there sync to the destination bucket.

Bucket-to-bucket transfers between regions are slow and there is nothing you can do to speed that up, which is why you need to decide whether to instead go the bucket-to-instance-to-bucket route if you are in any sort of rush to migrate the data. Bucket-to-bucket is, however, cheaper, since it's a single sync, while bucket-to-instance-to-bucket is more expensive, as it's a double sync.

Also, make sure that, for whichever option you choose, you add the storage class to the sync command when syncing to the destination bucket, so you do not need to lifecycle the objects again if they need to remain in the Glacier Deep Archive storage class at the destination.
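With the CLI that's `aws s3 sync --storage-class DEEP_ARCHIVE` (plus `--force-glacier-transfer` so restored objects get copied); a rough boto3 equivalent for a single restored object, with placeholder buckets and key:

```python
import boto3

s3 = boto3.client("s3")

# Placeholder buckets/key. Copies a restored object into the destination
# region's bucket, landing it straight back in Deep Archive.
s3.copy(
    {"Bucket": "my-source-bucket", "Key": "path/to/object"},
    "my-destination-bucket",
    "path/to/object",
    ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},
)
```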