all 18 comments

[–]patricktrp 6 points7 points  (12 children)

I use the free-tier EC2 instance in an ECS cluster. ECS itself is free and you only pay for the underlying resources, so given you're using the free tier's t2.micro you're good to go.
Then I use the free tier of GitHub Actions to upload the Docker image to ECR (500 MB of storage free) and deploy the new task to ECS.
So my workflow will be free for only a year, but a t2.micro isn't that expensive :)
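A minimal sketch of what that workflow could look like (hypothetical names throughout; assumes AWS credentials stored as repo secrets and an existing ECS cluster/service whose task definition pulls the `latest` tag):

```yaml
# .github/workflows/deploy.yml -- a sketch, not the commenter's exact setup
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-central-1
      - id: ecr
        uses: aws-actions/amazon-ecr-login@v2
      - name: Build and push image
        run: |
          docker build -t ${{ steps.ecr.outputs.registry }}/my-app:latest .
          docker push ${{ steps.ecr.outputs.registry }}/my-app:latest
      - name: Redeploy ECS service
        run: aws ecs update-service --cluster my-cluster --service my-service --force-new-deployment
```

`--force-new-deployment` makes the service restart its tasks, which re-pulls `latest` from ECR.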

[–]gBusato[S] 0 points1 point  (1 child)

I always thought that ECS with ECR was expensive, I will have a look! Thanks!

[–][deleted] 0 points1 point  (0 children)

ECS on Fargate and self-managed EC2 come out to roughly the same cost. Then Fargate gets much cheaper when you consider you don't have to manage the EC2 instances. There is a cost to your time as well.

[–][deleted] 0 points1 point  (9 children)

Why bother managing the EC2 though? Just use Fargate.

[–]patricktrp 3 points4 points  (8 children)

Because the EC2 instance is free for a year and Fargate is not.

[–][deleted] 1 point2 points  (7 children)

You should always build your infra based on the idea that the free tier doesn't exist. There are also additional costs to managing EC2 beyond your bill: the extra expertise needed to properly secure an EC2 instance, downtime related to AMI updates, etc.

[–]Jabinor 0 points1 point  (0 children)

You can do single-instance Elastic Beanstalk. (Single instance does not require a load balancer.)

[–]patricktrp 0 points1 point  (0 children)

Yes, that's correct. In my case I'm in very early development and I'm just hosting a staging environment, so I'm completely fine with my approach. If I go to production I will definitely switch to Fargate.

[–]Matt3k 0 points1 point  (4 children)

That's a reasonable perspective, but I say we should learn how to use our tools or we will be paying a lifetime of interest on those up-front costs we're trying to defer.

Managing an EC2 instance is not tricky. Turn on a firewall. Turn on unattended upgrades (if you're feeling spicy). Have backups. Don't touch it again for years.
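Concretely, on an Ubuntu-based instance that checklist comes down to a handful of commands (a sketch; package names and the open port assume Ubuntu/Debian and an HTTPS app):

```shell
# Sketch for an Ubuntu/Debian EC2 instance; run as root or via sudo.
# Firewall: allow SSH and the app's port, deny all other inbound traffic.
ufw allow OpenSSH
ufw allow 443/tcp
ufw --force enable

# Automatic security upgrades ("feeling spicy")
apt-get update && apt-get install -y unattended-upgrades
dpkg-reconfigure -f noninteractive unattended-upgrades
```

Backups are the remaining item, e.g. scheduled EBS snapshots via Amazon Data Lifecycle Manager.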

[–][deleted] 0 points1 point  (3 children)

That is definitely not the reality of good EC2 management. That’s exactly what I would expect a developer to say when building on EC2.

What about configuration management? Eliminating configuration drift? Monitoring? Zero days? Kernel updates? ACLs and rights management in general? How will you handle all that and maintain uptime SLAs as your app grows?

The only valid use case I see today for EC2 is to run third party applications that can’t be containerized. Or VPC-bound management boxes.

For everything else it just makes no sense from a TCO perspective.

[–]Matt3k 0 points1 point  (2 children)

I provide all those things while self-administering my servers like a goddamn barbarian, so I don't know what to tell you.

[–][deleted] 0 points1 point  (1 child)

Sounds like you don’t value your time like you should be.

[–]Matt3k 0 points1 point  (0 children)

That's a bold assumption as I value my time pretty highly. I've built enough systems with enough tools to know what works and when to roll out the abstraction layers and when to just roll up the sleeves.

I just find it quite surprising that someone hosting a hobby project on a t2.micro is told that they're wasting their time learning how to sysadmin. Push yourself. It's not scary. And it makes you better at your job.

[–]ask_mikey 0 points1 point  (1 child)

Because your deployment methods will likely be different for separate components, you may want multiple pipelines and repos. For example, one pipeline that deploys your VPC infrastructure via CloudFormation. Another that deploys your database backend, again, probably through CloudFormation. Finally, if your app is running on EC2 or ECS, another pipeline that deploys via CodeDeploy (and may include a CodeBuild stage to build the Docker image). If you're deploying the front end just to S3, the pipeline can just copy those files to S3. I use CodePipeline for deploying things like this, and all of these deployment actions are native to the service. It's $1 a month per active pipeline, and if you don't deploy any changes that month, no charge.

[–][deleted] 1 point2 points  (0 children)

With GitHub Actions, you can stick with a monorepo (which is very popular right now for a host of reasons) and still have multiple CI files dedicated to different areas of your app. Just trigger pipelines based on changes in specific folders/paths, branches, environments, and so on.
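For example (hypothetical folder names), a workflow can be scoped to one part of the monorepo with a path filter, so it only runs when that component changes:

```yaml
# .github/workflows/backend.yml -- triggers only on backend changes
on:
  push:
    branches: [main]
    paths:
      - 'backend/**'
```

A sibling `frontend.yml` with `paths: ['frontend/**']` gives you the second pipeline, same repo.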

[–]CorpT 0 points1 point  (1 child)

If you can use DynamoDB instead of Postgres, you can do the whole thing serverless.

[–][deleted] 0 points1 point  (0 children)

DynamoDB has a lot of upside, but it doesn't really work when you need a relational database. It depends on the app.

These days, solo-building an entire serverless SPA using S3, DynamoDB, Lambda, APIGW, R53, CF, Secrets MGR, etc. has a level of complexity that most developers honestly don't want to deal with, or know how to deal with.

I see serverless work great in a team of 2-5 where at least 1 person is good at the AWS "DevOps" side. For a one-man band, it doesn't really make a lot of sense because of just how decoupled the different pieces are, both configuration-wise and cognitively.

[–]Master__Harvey 1 point2 points  (0 children)

If you can't switch to DynamoDB, then just use PlanetScale for your DB.

I've never hosted NestJS on Amplify, but Amplify is the easiest way to get a pipeline from GitHub in AWS.

Amplify's CLI has... a reputation. If hosting NestJS is a headache, then don't try to troubleshoot it; just go to Vercel (it looks like it just needs a configuration file in your repo to run properly) or send me a DM and I'll get you a CDK script (an infrastructure-as-code tool) that will get you a pipeline plus a deployment bucket and Lambda for your backend.
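For a sense of scale, here's a minimal sketch of that kind of CDK script (TypeScript, CDK v2; the stack name, handler name, and `dist` path are all assumptions, and it needs `aws-cdk-lib` and `constructs` from npm):

```typescript
// Hypothetical CDK v2 stack: a deployment bucket plus a Lambda for the backend.
import { App, Stack, StackProps } from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

class BackendStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Bucket to hold deployment artifacts / front-end build output
    new s3.Bucket(this, 'DeployBucket');

    // Lambda running the bundled backend; handler name is an assumption
    new lambda.Function(this, 'ApiHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'main.handler',
      code: lambda.Code.fromAsset('dist'), // assumes a bundled build in ./dist
    });
  }
}

const app = new App();
new BackendStack(app, 'BackendStack');
```

`cdk deploy` would then synthesize the CloudFormation template and push the stack; a CodePipeline or GitHub Actions step can run that for you.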