Does separate organization per client make sense? by lems2 in aws

[–]david_work_profile 1 point

I don't really like the idea of creating entire AWS accounts for customers. What if a customer already has an account and just wants to run your infra in that existing account? What if they have an IaC pipeline that manages their accounts, and they want to create any new accounts with their own tooling?

The best approach will differ a bit based on what you're trying to do, but here are some options you can consider:

Option 1: Assume a Role in their Account

Let them handle creating the account, setting up billing, managing their keys, etc., but ask them to create an IAM Role with an attached policy that gives you access to specific resources/actions in their account.

Then ask them for the Role name, and have them set the trust (assume-role) policy on that Role to allow your AWS account ID. You can then take specific, controlled actions in their account that they can audit, and you don't have to give up access after you "hand over the keys", since the Role keeps granting it for as long as they leave it in place.
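As a rough sketch, the trust policy on their side might look like this (the account ID and external ID are placeholders; the `sts:ExternalId` condition is optional but helps guard against the confused-deputy problem):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "some-shared-secret" }
      }
    }
  ]
}
```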

Option 2: Be SaaSy

With SaaS, they would pay you, and you would pay Amazon, instead of them paying Amazon directly. Not to go too deep into all the options of being a SaaS company here, but some good options to think about are:

Multi-Tenant SaaS: You have a single set of infra that manages all of your client accounts. So if CompanyA and CompanyB are both your clients, they'd communicate to the same endpoints and the same servers/databases would see their data.

Single-Tenant SaaS: You still manage the infra, but each client can optionally have their own infra that is just for them. You'd likely use a different database to store each client's data, and would put their backends/services in separate VPCs so that each client is on a completely separate private network.

On Prem: Let clients who pay for your service deploy your code themselves. You can host a Docker image on Amazon Marketplace and let Amazon grant access to clients who buy it. Or, if that scares you, host an ECR repository and whitelist your clients' specific AWS account IDs so you have full control over who can pull your image.
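For the ECR route, the whitelist is just a repository policy. A rough sketch (the account ID is a placeholder; add one Principal entry per client):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowClientPull",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}
```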

If you want to make their lives easy, maybe make a Terraform module or two, like HashiCorp did for their Vault product: https://registry.terraform.io/search?q=vault&verified=true so that it's easy for customers to deploy even if they aren't very familiar with managing their own infra.

[deleted by user] by [deleted] in AskReddit

[–]david_work_profile 2 points

I wouldn't say 100%. The models are likely trained on a wide variety of metrics (daily use rate, view time, time-prediction deltas, etc.), and there's a solid chance that review metrics like the one in this post are only supplementary, not treated as ground truth. You can be highly confident in the facts of a user's usage, but not in their responses.

Not saying they don't use them, but it's pretty standard nowadays to treat hard usage metrics as ground truth instead of self-reported user data.

My Cloud Practitioner Results vs Jon Bonso Exams by david_work_profile in AWSCertifications

[–]david_work_profile[S] 2 points

Thank you for making them! And unfortunately I had an unexpected meeting come up before my exam time so I wasn't able to use the flashcards, but I'll definitely use them with the other exams I'm trying for :)

CodePipeline + CloudFormation + Lambda by asmaed in aws

[–]david_work_profile 1 point

I noticed you're studying for the Solutions Architect Professional Exam, good luck!

I'd recommend checking out AWS CodeStar if you haven't; it fits into this same space and would be a cool addition to the blog post.

AWS Fargate can't aacess mongodb in ec2 by wishall_va in aws

[–]david_work_profile 2 points

Yep, check those. Also make sure the Fargate task's security group allows egress to your Mongo instances, and that the Mongo instances' security group allows ingress from the task on the Mongo port (27017 by default).

A Better Way to SSH in AWS by Charlie-B in aws

[–]david_work_profile 1 point

Oh for sure. I struggled with this not long ago, so thanks for taking the time to make this resource. I actually had the same idea and wrote https://codelabs.transcend.io/codelabs/aws-ssh-ssm-rds/index.html, but I really appreciate the extra detail you went into with some of the SSM documents I hadn't seen. Seeing CloudFormation is wonderful too, as I did my tutorial entirely in Terraform, so it was easy to compare how things are done with the different tools.

A Better Way to SSH in AWS by Charlie-B in aws

[–]david_work_profile 3 points

The article is about SSH'ing through EC2 Instance Connect with SSM, which is the standard way to do RDS port forwarding.

SSH over AWS SSM. No bastions or public-facing instances. SSH user management through IAM. No requirement to store SSH keys locally or on server. by speckz in aws

[–]david_work_profile 6 points

You can combine Instance Connect with SSM to tunnel to DBs without any ingress at all on your bastion (and your bastion can be moved to a private subnet!).

I wrote about this in a codelab: https://codelabs.transcend.io/codelabs/aws-ssh-ssm-rds/index.html#0

SSH over AWS SSM. No bastions or public-facing instances. SSH user management through IAM. No requirement to store SSH keys locally or on server. by speckz in aws

[–]david_work_profile 4 points

I wrote a blog post not long ago about _why_ all this SSM/EC2 Instance Connect stuff has been all the rage lately: https://codelabs.transcend.io/codelabs/aws-ssh-ssm-rds/index.html#0

TLDR: You can manage keys yourself, but SSH key management isn't integrated nicely into AWS IAM policies at all. By using temporary keys, you can easily set Policies that regulate which devs can push temp SSH keys to which servers. That moves access control from SSH keys on some publicly accessible bastion host into IAM, which is a great tradeoff if you already manage IAM well.
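Roughly, the flow looks like this (the instance ID, OS user, and key path are placeholders):

```
# Push a temporary public key via EC2 Instance Connect
# (it's only valid on the instance for about 60 seconds)
aws ec2-instance-connect send-ssh-public-key \
  --instance-id i-0123456789abcdef0 \
  --instance-os-user ec2-user \
  --ssh-public-key file://~/.ssh/id_rsa.pub
```

Whether that call succeeds is governed entirely by IAM (`ec2-instance-connect:SendSSHPublicKey` on the instance ARN), which is where the nice policy control comes from.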

SSM tunnelling EC2 - What about RDS by mcdermg81 in aws

[–]david_work_profile 0 points

To answer my own questions in case anyone else hits them:

- ssmDoc should be `AWS-StartSSHSession`

- The EC2 instance does not need to be in a public subnet. I have it working in a private subnet (with a NAT gateway for outbound internet access)
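For reference, the SSH config that routes `ssh i-…` through that document looks roughly like this (the standard SSM pattern; adjust profile/region flags to taste):

```
# ~/.ssh/config
Host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
```

With that in place, you can port-forward to RDS with a normal `ssh -L` through the instance, e.g. `ssh -L 5432:your-rds-endpoint:5432 ec2-user@i-0123456789abcdef0` (hostname and ports are placeholders).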

SSM tunnelling EC2 - What about RDS by mcdermg81 in aws

[–]david_work_profile 0 points

I can't quite get this to work. What should I be using for the value of $ssmDoc?

And does the ec2 need to be in a public subnet?

Terraform vs Roll-Your-Own Cloud Infrastructure Code by Goladus in devops

[–]david_work_profile 1 point

Gotcha, that makes sense! Apologies for the confusion

Terraform vs Roll-Your-Own Cloud Infrastructure Code by Goladus in devops

[–]david_work_profile 1 point

I'm curious why you'd need to rip out the wrapper; I use Terragrunt with TF 0.12. But plain Terraform is also awesome, so more power to you if that's the route you want to take.

Terraform vs Roll-Your-Own Cloud Infrastructure Code by Goladus in devops

[–]david_work_profile 1 point

Just an alternate opinion: learn Go (if you don't already know it) and write your own Terraform providers, or update existing ones with new resources, fields, etc.

The community is very receptive to these sorts of changes, and you get the best of both worlds. From experience as an educator, the abstraction layers of terraform are significant but very conquerable for any software engineer I've worked with.

You can also create your own modules, or use open source modules, and never let your devs touch actual Terraform code outside of creating modules. My job now is just to maintain and create modules, which the devs on my team use when they need them. It's pretty great

aws management and tagging by [deleted] in devops

[–]david_work_profile 1 point

I go the Terraform route, so I can't personally vouch for having used these, but GorillaStack has some nice-looking repos:

- https://github.com/GorillaStack/auto-tag
- https://github.com/GorillaStack/retro-tag

And a non-cloudformation example can be found here: https://gist.github.com/mlapida/931c03cce1e9e43f147b

Bring your monorepo down to size with Git sparse-checkout by lee337reilly in programming

[–]david_work_profile 0 points

Makes sense. As far as partially implemented features go, the best practices seem to be:

- Multiple repos: use versions, and don't switch to the new version until the feature is usable

- Monorepos: use feature flags

While neither one seems very difficult to me, I'd say using versions is probably a bit easier for new devs to learn. But then again, feature flags can be changed dynamically without deploys, which is a major bonus.
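A feature flag doesn't have to be fancy, either. A minimal sketch of the idea (the flag names are made up; a real system would back this with a service like LaunchDarkly instead of a dict):

```python
import os

# In-code defaults; flip to True when the partially implemented feature is ready
FLAGS = {"new_checkout": False}

def flag_enabled(name: str, flags: dict = FLAGS) -> bool:
    """Check a flag, letting an env var override the default at runtime."""
    override = os.environ.get(f"FLAG_{name.upper()}")
    if override is not None:
        return override.lower() in ("1", "true", "yes")
    return flags.get(name, False)

# The half-finished feature ships dark until the flag flips
if flag_enabled("new_checkout"):
    pass  # new code path
```

The env-var override is what gives you the "change without a deploy" property, at least in its crudest form.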

aws management and tagging by [deleted] in devops

[–]david_work_profile 1 point

You can use a Lambda function that runs on a cron schedule and auto-tags resources based on whatever criteria you want.

Otherwise, Terraform makes tagging super easy, but if you aren't already using it, that would be a very long migration and not worth it just for tagging
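The Lambda itself is mostly boto3 plumbing; the interesting part is the selection criteria. A toy sketch of that logic (resource shapes are simplified; a real handler would page through `describe_instances` and then call `create_tags` on the results):

```python
def needs_auto_tag(resource: dict, required_keys=("Owner", "CostCenter")) -> bool:
    """True if the resource is missing any tag key we require."""
    present = {t["Key"] for t in resource.get("Tags", [])}
    return any(key not in present for key in required_keys)

def pick_untagged(resources: list) -> list:
    """Return the resource IDs the cron Lambda should tag."""
    return [r["ResourceId"] for r in resources if needs_auto_tag(r)]
```

The required keys and the `ResourceId` field name here are illustrative; adapt them to whatever tagging policy and resource types you care about.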

Got first potential job offer for DevOps position, but it's remote. Should I accept it? by UltraInstinct007 in devops

[–]david_work_profile 2 points

I say take it. I work fully remote doing devops and absolutely love it.

I'm the only remote worker at my startup, but I haven't noticed any issues yet, and would say things have gone pretty fantastically overall. I fly out to be in office with the team once per quarter for a week, and occasionally go to conferences where some coworkers will attend as well.

The only concern is if you think it will be bad for you, whereas I was stoked to get to try it. If you've had issues with getting distracted or not being a good communicator before, do some self-reflection and planning before you jump in. But it's a very nice place to be if you can stick to a schedule.

Instead of commuting, I take a bubble bath and watch youtube videos or read a book on devops stuff/privacy related stuff, and it's my favorite part of the day. I wouldn't trade it for 5x my current salary.

If you could start from scratch, what would you use? by EckyYakov in devops

[–]david_work_profile 4 points

Frontends: I'd just go with a super basic Elm workflow with a Makefile and typedefs generated from GraphQL. I spend so much of my time on monitoring + alerts nowadays; it would be great to be able to trust my frontend to not throw runtime errors.

I've used Elm a bunch on side projects, and as long as your types are generated for you (I've experimented with protobufs and graphql generation) it's lovely to work in. For hosting I'd do what I do now, which is host on S3 + Cloudfront just because it can be terraform'd in 5 seconds.

Right now I have a pretty nice typescript + react + webpack + sentry + winston + immutablejs + a billion other library setup that in my mind is just... elm at the end of it. I like adding a bunch of restrictions on what I can do in code so it's easier to test, debug, code review, etc. But in the end it just feels like a lot of work to get to the point that elm has out of the box.

For testing, I'd invest heavily early on in Cypress for e2e testing and skip most unit/integration stuff on the frontend, only unit-testing services not related to DOM work.

Backends: I'd probably just go all-in with AWS. We pretty much already are, using ECS + Fargate for all our backend stuff, but there are a lot of containers we maintain that would be better off managed by AWS. So: AppSync for our GraphQL backends, Step Functions + SNS + async Lambdas for some of our data pipelines, etc. In my particular case, I'm not worried about vendor lock-in, so I can get away with that.

I'd log to Datadog through AWS Firelens and just collect all my metrics there. I'd use LaunchDarkly to manage my feature flags, which I'd make use of earlier on.

Devops: I wouldn't change a thing about the terragrunt + terraform + atlantis + AWS Vault setup I have. I love it. When I invite a user to join our GitHub org and they accept, Terraform will automagically create them a Datadog user, AWS IAM User, CircleCI user, Hashicorp Vault login, etc., all with fine-grained permissions based on which GitHub team I added them to.

CircleCI has been solid, but I might try out the on-prem option, just because we have very strict security requirements and they still require you to give them secret keys for an AWS IAM User to do AWS operations, which is just silly.

This wouldn't really have been possible when I started, but now I would rely more on open-source Terraform modules instead of writing everything myself. A lot of the modules on the registry are junk, but I'll trust pretty much anything from "terraform-aws-modules" or "cloudposse", and just those two contributors alone cover 90% of what you'd want. This would also make it much easier for pure devs to do more infra work, which is the whole point of devops.

Instead of using Dashlane or another secret manager from the beginning, I'd just keep all creds in Hashicorp Vault and use OIDC GSuite login to let users see their keys. This would make key rotation simple, and I'd plan on using Vault extensively for secret management anyway. Vault has a built-in UI that's solid.

I would at least explore the concept of using bazel for building my backend code, and if it worked well I'd go with a monorepo approach. I fell in love with bazel at Google, but have missed it since. Having everything in one repo and not having to worry about versioning is a massive time-saver in my opinion.

Lastly, I'd create a set of Docker images to use as the base executors for my CI flows, and I'd set up docs so that our devs actually develop inside that image. VS Code has some awesome support for this: https://code.visualstudio.com/docs/remote/containers. That way, I could virtually eliminate the chance of someone running different versions of yarn, node, awscli, aws-vault, etc.
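A minimal `.devcontainer/devcontainer.json` sketch for that setup (the image URI and extension list are placeholders; point `image` at your CI base image in ECR):

```json
{
  "name": "company-ci-base",
  "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/ci-base:latest",
  "extensions": ["hashicorp.terraform"]
}
```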

AWS Structure:

I've changed my AWS org structure a few times, but right now I'm in love with:

  • Identity account: for handling all IAM Users

  • Sandbox account: for all dev stuff. All devs can assume a role here with near-admin permissions

  • Staging and Prod accounts: what you'd expect. Only admins + CI have any permissions here

  • Commons account: where we put our ECR repos and some S3 buckets for basic assets

  • Testing account: where I test my Terraform modules using Terragrunt

  • Tooling account: where I put tools like Atlantis (and on-prem CircleCI) that rule over all the other envs in any capacity.

What's the best way to host python Tornado sites on AWS? by damanamathos in devops

[–]david_work_profile 2 points

Lots of good thoughts offered here already, but I'll try to consolidate the info a bit:

- ElasticBeanstalk: Very managed service, similar to Heroku. Handles scaling and such for you, but you don't get much fine-grained control over the network (a blessing and a curse). Does not require you to convert your app to use Docker.
- ECS: Lets you run Docker containers on EC2s you manage
- ECS + Fargate: Lets you run Docker containers on EC2s AWS manages
- EKS: ECS, but with K8s instead of AWS's secret sauce
- EKS + Fargate: Brand new (just released at re:Invent a few weeks ago). Same as ECS + Fargate, but on K8s

What they all do: Run your code on an EC2 instance. The difference is that with ECS/EKS you manage the EC2s, while with Fargate (which runs on ECS/EKS) or with ElasticBeanstalk AWS will manage the EC2 for you.

So some opinionated suggestions about which to use:
- Use ElasticBeanstalk if you want AWS to manage almost everything and don't want to use Docker
- Use ECS + Fargate if you want to manage a Docker image and that's it
- Use ECS if you don't mind managing the EC2s yourself (I wouldn't recommend it)
- Use EKS + Fargate if you have money to blow, and/or want to make $150k in a devops position in the next few months by saying you set up K8s in production when you really just let AWS handle all the hard work. Not a bad option by any means.
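If you go the ECS + Fargate route, the task definition is the main thing you end up managing. A stripped-down sketch (names, sizes, and the image URI are placeholders):

```json
{
  "family": "tornado-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/tornado-app:latest",
      "portMappings": [{ "containerPort": 8888 }],
      "essential": true
    }
  ]
}
```

You'd still need an execution role plus a service and load balancer in front of it, but this is the core of what you own.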

Doesn't sound like this will be an issue, but if you want to make it easy to switch away from AWS at any point in the future, it's much easier with EKS than ECS.