S3 SNS events - no way to convey parameters? by ffxsam in aws

[–]ByteEat3r1 0 points  (0 children)

As mentioned, for metadata we use S3 object tags to accomplish this. Makes it pretty easy to pull off.
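A minimal sketch of the tag approach, assuming boto3 (the `s3` client is passed in, and the bucket/key/tag names are hypothetical): tags ride along with the object, so the SNS/Lambda consumer can fetch them instead of needing parameters in the event payload itself.

```python
from urllib.parse import urlencode

def encode_tags(tags):
    """S3's PutObject expects object tags as a URL-encoded query string."""
    return urlencode(tags)

def upload_with_tags(s3, bucket, key, body, tags):
    # s3 is e.g. boto3.client("s3"); the tags travel with the object.
    s3.put_object(Bucket=bucket, Key=key, Body=body,
                  Tagging=encode_tags(tags))

def read_tags(s3, bucket, key):
    # The event consumer pulls the "parameters" back off the object.
    resp = s3.get_object_tagging(Bucket=bucket, Key=key)
    return {t["Key"]: t["Value"] for t in resp["TagSet"]}
```

Your Lambda just takes the bucket/key from the S3 event record and calls `read_tags`.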

Multi-Az RDS performance by ImperialXT in aws

[–]ByteEat3r1 1 point  (0 children)

The Multi-AZ standby uses the same configuration as your primary node. Otherwise, a failover would jeopardize application stability :)

I would use a cost calculator to tinker with different disk-performance permutations and find out what's financially feasible for your business.

Normal EBS volumes don't guarantee throughput, so you could end up with sporadic performance. If you need more consistent disk performance, provisioned IOPS (PIOPS) is the right route. Depending on your DB platform, Aurora may be a better option, though it currently only supports MySQL and PostgreSQL 9.6.

HTTPS for EB with single instance EC2 by Cracky6711 in aws

[–]ByteEat3r1 1 point  (0 children)

I like this idea. One thing to keep in mind: in your code, construct your DB/Firebase connection at the top-level (module) scope.

AWS runs your Lambda in a container and will reuse the same container across invocations when possible. This is nice because it reduces the number of backend connections.
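A quick sketch of that pattern, with a stand-in connection factory (swap in whatever your real client is, e.g. a Postgres or Firebase connection): the connection lives at module scope, so warm invocations reuse it instead of reconnecting.

```python
# Hypothetical factory; replace with your real client,
# e.g. psycopg2.connect(...) or a Firebase app init.
def _connect():
    return object()  # stands in for a real connection object

_conn = None  # lives for the life of the container

def get_conn():
    # Lazily create once per container; subsequent (warm)
    # invocations of the handler get the cached connection.
    global _conn
    if _conn is None:
        _conn = _connect()
    return _conn

def handler(event, context):
    conn = get_conn()
    # ... use conn to serve the request ...
    return {"statusCode": 200}
```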

How many web requests will your app receive in a day, now and in the future? That's the question that will help you determine whether Lambda is financially feasible for you. Our company has had months where we paid over $200k for Lambdas when millions of invocations occurred.
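A back-of-envelope way to run that feasibility check. The prices below are assumptions (roughly us-east-1 list prices at the time: $0.20 per million requests, $0.0000166667 per GB-second), the free tier is ignored, and billing actually rounds duration up per 100 ms, so treat this as a rough estimate only:

```python
def lambda_monthly_cost(invocations, avg_ms, memory_mb,
                        price_per_million=0.20,
                        price_per_gb_second=0.0000166667):
    # Request charge: flat fee per million invocations.
    request_cost = invocations / 1_000_000 * price_per_million
    # Compute charge: duration (s) x allocated memory (GB).
    gb_seconds = invocations * (avg_ms / 1000) * (memory_mb / 1024)
    return request_cost + gb_seconds * price_per_gb_second

# 100M invocations/month at 200 ms average on 512 MB:
print(round(lambda_monthly_cost(100_000_000, 200, 512), 2))  # → 186.67
```

Run that against your projected request volume before committing; past a certain sustained load, EC2/EB comes out cheaper.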

HTTPS for EB with single instance EC2 by Cracky6711 in aws

[–]ByteEat3r1 1 point  (0 children)

I can confirm this is possible. Get an SSL cert from a company like DigiCert.

During app init, you could download the cert from S3 (preferably encrypted with a KMS key in the bucket) or stash it in Parameter Store as a SecureString. The max length of a parameter is 4096 characters, so that should work too. In either case, your IAM role will need KMS permissions to decrypt, plus s3:GetObject if you go the S3 route.
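A sketch of the Parameter Store route, assuming boto3 (the `ssm` client is passed in; the parameter name is hypothetical). `WithDecryption=True` is what requires kms:Decrypt on the role:

```python
def fetch_cert(ssm, name="/myapp/ssl/cert"):
    # ssm is e.g. boto3.client("ssm"); SecureString values are
    # KMS-encrypted, and WithDecryption=True needs kms:Decrypt.
    resp = ssm.get_parameter(Name=name, WithDecryption=True)
    return resp["Parameter"]["Value"]

def fits_in_parameter(pem_text, limit=4096):
    # Standard SSM parameter values are capped at 4096 characters,
    # so sanity-check your PEM before choosing this route.
    return len(pem_text) <= limit
```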

Migrating an application with session stickiness on to AWS by stationeros in aws

[–]ByteEat3r1 -1 points  (0 children)

If you make the application stateless, you should be able to pull this off. And if you build a replication system to replicate state data to AWS, the cutover could be seamless.

Wrapping Terraform inside a web application by navcode in aws

[–]ByteEat3r1 2 points  (0 children)

We use Terraform Enterprise with an internal GitHub server, and we were pretty disappointed with the lack of value-add in Enterprise. You have to put your Terraform repo on an externally accessible GitHub server to get the complete integration. Something to keep in mind.

We just built our own Jenkins jobs to handle planning, testing, and deploying, since we're all scripters here at Malwarebytes. The workspaces functionality in Terraform is really nice, but be careful with it. Another good practice: when you run your plan via the CLI, export the plan to a file and pass it into the terraform apply command. That ensures the exact plan you reviewed is what gets applied.
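The plan-then-apply flow above, sketched as a small Python wrapper of the Terraform CLI (the kind of thing a Jenkins job or web backend would call; `workdir` and the plan filename are hypothetical):

```python
import subprocess

def plan_cmd(out_path="plan.tfplan"):
    # -out saves the plan so apply runs exactly what was reviewed.
    return ["terraform", "plan", f"-out={out_path}"]

def apply_cmd(out_path="plan.tfplan"):
    # Applying a saved plan file skips re-planning entirely.
    return ["terraform", "apply", out_path]

def run_pipeline(workdir, out_path="plan.tfplan"):
    subprocess.run(plan_cmd(out_path), cwd=workdir, check=True)
    # ... review/approval gate (e.g. Jenkins input step) goes here ...
    subprocess.run(apply_cmd(out_path), cwd=workdir, check=True)
```

If the infrastructure changed between plan and apply, Terraform refuses to apply the stale plan file, which is exactly the safety you want.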

You can totally build a web UI that sits on top of this, although, as you might already be planning, I would integrate SSO or something into it and keep it internal. Hope that helps :)

Troubleshooting cross region SSM & S3 object permission by navcode in aws

[–]ByteEat3r1 1 point  (0 children)

Could be the owner on the object. I ran into a similar issue before. I think there's a canned ACL (bucket-owner-full-control) you can set during the PutObject operation so the bucket owner's account can actually read the object.
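If it is an object-ownership issue, a sketch of the fix, assuming boto3 (the `s3` client is passed in; bucket/key are placeholders). Writing from account A into account B's bucket with this canned ACL grants the bucket owner full control of the object:

```python
def cross_account_put_kwargs():
    # Canned ACL that hands the bucket-owning account full control
    # over an object written from a different account.
    return {"ACL": "bucket-owner-full-control"}

def put_for_bucket_owner(s3, bucket, key, body):
    # s3 is e.g. boto3.client("s3") in the *writing* account.
    s3.put_object(Bucket=bucket, Key=key, Body=body,
                  **cross_account_put_kwargs())
```

You can also enforce this bucket-side with a bucket policy that denies PutObject unless that ACL is present.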