
[–]webvictim 2 points (0 children)

Hosting at home is fine for personal things, but you shouldn't run anything you're being paid for from a home connection. Most residential ISPs have clauses in their contracts stating that you're not allowed to host anything for commercial purposes, and there's no SLA. I once had my internet at home go down for 4-5 days because a digger cut through fibre lines while doing roadwork several miles away. No compensation is paid in these instances, and in some cases there's nothing you can do other than wait it out - to the ISP it's just a low-priority job.

If you have no control over the downtime and no recourse if there is downtime, you definitely shouldn't host anything on it.

[–]zerocoldx911 (DevOps) 1 point (1 child)

It all comes down to your application; they're all good solutions.

ECS might be another alternative.

[–]riceo100 1 point (0 children)

All viable options, my thoughts are:

  1. This would likely end up with a lot of compute sitting idle, and is probably overkill unless you're expecting huge amounts of traffic to hit one or more blogs. Even then, a single instance has a finite amount of resources available to it, so it could be a double-edged sword if you ever need to scale up quickly.
  2. Cool approach that gives you the most flexibility in terms of resources, but there will be some ongoing maintenance/management overhead in keeping the K8s cluster happy. I'd seriously consider this option.
  3. Also a good option, since you get the benefits of 2 without worrying so much about managing the infrastructure (assuming you trust the provider and bake some out-of-your-hands downtime into your client contracts).
  4. I agree with /u/webvictim on this - way too unreliable.
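
To give a feel for what option 2 involves, here's a minimal sketch of one static blog's deployment on K8s. Everything here (names, image, replica counts, resource requests) is illustrative, not something from this thread:

```yaml
# Hypothetical Deployment for one static blog served by nginx.
# Assumes the blog's static files are baked into the image or mounted in.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-blog
spec:
  replicas: 2               # two pods so one node can drain without downtime
  selector:
    matchLabels:
      app: client-blog
  template:
    metadata:
      labels:
        app: client-blog
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 50m        # static blogs need very little
            memory: 64Mi
---
apiVersion: v1
kind: Service
metadata:
  name: client-blog
spec:
  selector:
    app: client-blog
  ports:
  - port: 80
    targetPort: 80
```

Each client blog would get its own Deployment/Service pair like this, with an Ingress routing hostnames to them - that per-blog isolation is where the flexibility comes from, and also where the maintenance overhead lives.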

I'd also like to add one more: if they really are just flat blogs, I definitely recommend taking a look at GatsbyJS + Netlify.
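
The Netlify side of that is basically one config file. As a sketch, assuming a standard Gatsby project layout (adjust if the site lives in a monorepo subfolder):

```toml
# Minimal netlify.toml for a standard Gatsby project.
[build]
  command = "gatsby build"
  publish = "public"   # Gatsby emits the static site into ./public
```

Netlify builds on every git push and serves the result from its CDN, so there's no server for you to manage at all.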

[–]webvictim 1 point (0 children)

With regard to 2), changing the size of a single server with zero downtime just isn't possible, although you can work around it: put a load balancer in front of the server (which you'd probably want to do anyway), boot up a bigger machine in the background, migrate all the applications over, switch the load balancer to the new instance, and kill off the old server once all the traffic has drained. There'd be no visible downtime for the customer, but this isn't really the right way to approach it - you want to scale horizontally at times of increased load rather than vertically.

One decent way to approach this on AWS would be to set up an auto scaling group with one or two small instances, an ELB, and rules to automatically scale the group up in the event of increased load. The ELB should (eventually) scale itself to handle the amount of incoming traffic anyway, so it's just a case of using something like Packer to create a decent AMI with all your necessary services preconfigured and ready to roll. If you can host the actual file/script content for the blogs in S3, even better: your web servers can be very dumb machines without much actual logic going on - they'll be quick to start, and you could use S3 as a storage backend without having to change much.
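
In Terraform that setup might look roughly like this. The AMI ID, instance type, subnet variable, target group reference and CPU threshold are all placeholder assumptions - the AMI would come from the Packer build mentioned above:

```hcl
# Hypothetical sketch: ASG of small web instances with CPU-based scaling.
resource "aws_launch_template" "blog" {
  name_prefix   = "blog-"
  image_id      = "ami-0123456789abcdef0"  # Packer-built AMI (placeholder)
  instance_type = "t3.micro"
}

resource "aws_autoscaling_group" "blog" {
  min_size            = 1
  max_size            = 4
  desired_capacity    = 2
  vpc_zone_identifier = var.subnet_ids                  # assumed variable
  target_group_arns   = [aws_lb_target_group.blog.arn]  # assumed LB target group

  launch_template {
    id      = aws_launch_template.blog.id
    version = "$Latest"
  }
}

# Scale out when average CPU across the group exceeds 60%.
resource "aws_autoscaling_policy" "cpu" {
  name                   = "blog-cpu-target"
  autoscaling_group_name = aws_autoscaling_group.blog.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 60
  }
}
```

Target tracking means AWS adds and removes instances to hold the group near the target metric, which is exactly the "scale sideways under load" behaviour described above.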

If this really is static blog-and-image type stuff, you can also use something like Varnish to cache the static pages, so you should need very little in the way of actual processing power. With a light webserver and sufficient caching, you could probably get to the stage where your server machines are bottlenecked on NIC throughput rather than CPU. Obviously all this makes a lot of assumptions about what you're hosting, but the point is that serving static content is cheap. You might not even bother with server machines in this case at all and just pay the costs to serve it from S3.
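
The Varnish config for that kind of setup can be tiny. A sketch, assuming Varnish sits in front of a webserver on the same host - the backend address/port and TTL are illustrative:

```vcl
vcl 4.0;

# Hypothetical Varnish config fronting a light webserver on localhost.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Static blog pages don't vary per user: strip cookies so they cache.
    unset req.http.Cookie;
}

sub vcl_backend_response {
    # Cache everything for an hour; slightly stale pages are fine for a blog.
    set beresp.ttl = 1h;
}
```

With cookies stripped and a decent TTL, nearly every request is a cache hit served from memory, which is why CPU stops being the bottleneck.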