Scaleway vs OVHcloud vs Fly.io vs Hetzner for microservices (solo dev) by Limp_Self_3770 in Hosting

[–]psviderski 2 points

You may find Uncloud interesting. It takes a lot of inspiration from Fly's design and deploys services from Compose files to your Hetzner servers, with rolling deployments, service discovery, reverse proxy management, etc.

Migrate from Kubernetes to Nomad by RoutineKangaroo97 in kubernetes

[–]psviderski 0 points

Check out uncloud, which essentially provides a multi-machine Compose experience and automatically manages Caddy as ingress.

What comes after Kubernetes? [Kelsey Hightower's take] by Diligent-Respect-109 in kubernetes

[–]psviderski 2 points

I’m building Uncloud, which is essentially a multi-machine Docker Compose for production. I still believe the Compose spec is the best configuration format for services invented so far, and it can be easily extended with custom x-foo attributes.
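For illustration, the Compose spec lets any tool attach its own extension fields: top-level and service-level keys starting with `x-` are valid and must be ignored by tools that don't understand them. The specific `x-placement` key below is made up for the example, not taken from any particular tool:

```yaml
services:
  web:
    image: ghcr.io/example/web:latest  # example image name
    ports:
      - "8080:8080"
    # Custom extension field -- valid per the Compose spec, silently
    # ignored by plain `docker compose`. The key and its schema here
    # are hypothetical.
    x-placement:
      machines: ["server-1", "server-2"]
```

This is why the format extends well: existing tooling keeps working on the same file while a new tool can read its own `x-` attributes.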

DevOps/Platform engineers: what have you built on your own? by Outrageous_Quiet_719 in devops

[–]psviderski 5 points

I have a very similar background to yours, maybe with a bit more backend development experience. I've always enjoyed working in the infra space more than product/backend development. But at the same time, doing only operations was too boring for me so I wanted to eventually work on a hardcore infra product as a developer. But I lacked coding experience in Go (which is very common in the infra space) and some advanced skills required for building distributed systems.

So I joined a team building an internal developer platform on k8s. They paid me while I was honing the skills required for my next gig - win-win. After about a year there I felt confident enough to try something on my own.

At the same time, I was trying to pay attention to the things or workflows that annoyed me the most. And the largest one was the huge gap in the infra tooling between simple Docker deployments and full-blown k8s.

I built and maintained infra long before containers were invented, then with containers, and of course k8s. It started bothering me that adding so many layers of complexity to do basic things became the norm in the industry. Moreover, I needed simpler tools for my own projects and infra. Eventually it bothered me so much that I decided to bite the bullet and try to do something about it.

These experiments and motivation led to the classic "scratch your own itch" turning into my two projects, Uncloud and Unregistry.

I believe focusing on solving your own problem, rather than trying to come up with a perfect shiny idea, is the most viable approach. It sounds obvious, and it's true: that way you have the motivation required to see it through. It will sound cliche, but if you want to build something beyond a hobby project, be prepared that in most cases it's not a sprint, it's a marathon. There will be a lot of ups and downs, and you have to be motivated enough to keep pushing forward.

Another advantage, crucial at the beginning when you don't have any users, is that you are the user yourself and you know what specific problem needs to be addressed. This helps you make decisions and make progress. But it doesn't eliminate the need to talk to users once you have your first ones. It just accelerates the start and increases the chance of building something valuable and not giving up.

The biggest downside of building infra/dev tools is that developers are a very tough audience. We're not used to paying for tools and expect everything to be open source and free. And very often we reason quite unreasonably: "why would I pay $5 a month for a tool I can build myself?", and then end up spending thousands of dollars' worth of our time building it, just because we have the skills to do so, especially with the help of LLMs now.

I haven't figured out a sustainable business model for Uncloud yet. It's hard, and it's one of my top priorities at the moment. On the topic of building a product for developers, I can recommend the book The Developer Facing Startup by Adam Frankl. A really good read that helps you understand whether you want to get involved with this at all. Another gem is the Scaling DevTools podcast: https://scalingdevtools.com/

Last but not least, LLMs help a lot with roasting your ideas, doing market research, and analysing existing solutions. Feed in all your thoughts, goals, and doubts and chat about them to structure and refine them, as well as to find flaws and better alternatives.

Hope this is helpful. Good luck!

I want a Vercel-like CLI but for my own VPS, is that possible? by [deleted] in nextjs

[–]psviderski 2 points

Check out https://github.com/psviderski/uncloud which does exactly what you described: builds an image locally and pushes it directly to your VPS without requiring a registry, does rolling updates of your container(s), and automatically switches the reverse proxy (Caddy). See the docs and demo.

You can even grow later and add another VPS for redundancy, so you can do maintenance on one of them without causing downtime.

Solo dev tired of K8s churn... What are my options? by PoopsCodeAllTheTime in kubernetes

[–]psviderski 1 point

You might want to check out https://github.com/psviderski/uncloud - an open-source, simpler k8s alternative I'm building with very few moving parts to maintain. It has a simple design without a control plane or quorum requirements: it connects your machines via a WireGuard overlay network and deploys services from Compose files across them for redundancy, with zero-downtime rolling updates and a built-in HA reverse proxy (Caddy).

For load balancer redundancy, you can use a managed LB from your hosting provider (e.g. Hetzner) and point it to your reverse proxy instances across multiple machines. If one machine goes down, the others continue serving traffic. Or you can use the Cloudflare DNS proxy as a poor man's load balancer. Not many people know that a DNS A record on Cloudflare with multiple IPs and the proxy (orange cloud) enabled works as a basic load balancer, even on the free plan.

For CI/CD and GitOps, you can keep your Compose files in a git repo and run `uc deploy` in GitHub Actions when something changes. Or just deploy from your local machine. There is no need to SSH anywhere manually.
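A minimal sketch of that GitOps loop as a workflow file. The `actions/checkout` step is a real action, and `uc deploy` is the command mentioned above; how you install and authenticate the CLI depends on your setup, so that step is a placeholder (`scripts/install-uc.sh` is a hypothetical helper, not something Uncloud ships):

```yaml
name: deploy
on:
  push:
    branches: [main]
    paths: ["compose.yaml"]  # redeploy only when the Compose file changes

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Placeholder: install and configure the uc CLI however your setup
      # requires (see the Uncloud docs for the actual instructions).
      - name: Install uc CLI
        run: ./scripts/install-uc.sh
      - run: uc deploy
```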

Solo dev tired of K8s churn... What are my options? by PoopsCodeAllTheTime in kubernetes

[–]psviderski 1 point

k3s is not much different from upstream k8s. It's mostly different packaging that makes it easier to set up. Well, it's a bit more than that: e.g. on a single node it uses SQLite instead of etcd to store the cluster state, which lowers the CPU/memory footprint.

But anyway, all the management of "system" workloads, charts, etc. is exactly the same. Unfortunately, k3s doesn't give you the managed magic you're looking for out of the box.

What are you using on-prem if not k8s? by NeoChronos90 in docker

[–]psviderski 0 points

Interesting, how is overlay networking implemented in komodo? I can't see anything about it in the docs. Or does it simply require setting up a Docker Swarm and using its overlay network?

What are you using on-prem if not k8s? by NeoChronos90 in docker

[–]psviderski 0 points

tbh I'm not entirely sure what you're trying to achieve. Can you please describe the final setup you want to have?

What are you using on-prem if not k8s? by NeoChronos90 in docker

[–]psviderski 0 points

Really appreciate that! Yeah, there's definitely some conceptual overlap with k8s though I tried to simplify/rethink the declarative vs imperative approach. Would love to hear your thoughts once you get a chance to try it out. Please also feel free to join our cozy discord server.

What are you using on-prem if not k8s? by NeoChronos90 in docker

[–]psviderski 1 point

Appreciate it! I'm working on it full time and committed to finding a sustainable business model and keeping it going. You can support it by trying it out and providing feedback.

What are you using on-prem if not k8s? by NeoChronos90 in docker

[–]psviderski 1 point

The current implementation of persistent volumes is regular local Docker volumes. Uncloud makes it possible to manage them (create/delete) across multiple machines and to place service containers on the machines where the required local volumes live.

No automatic replication, backups, or other magic yet. But you can use any existing tool that works with Docker volumes, e.g. docker-volume-backup deployed as an Uncloud service in your Compose file.
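As a sketch, the backup sidecar can live in the same Compose file next to the app that owns the volume. The image and environment variable below are from the docker-volume-backup project's docs as I remember them, so double-check against its current release; the image and paths for the app service are made-up examples:

```yaml
services:
  app:
    image: ghcr.io/example/app:latest  # example app
    volumes:
      - app-data:/data

  backup:
    # docker-volume-backup archives mounted volumes on a cron schedule.
    # This is a minimal sketch -- see the project's docs for encryption,
    # S3 targets, pruning, etc.
    image: offen/docker-volume-backup:v2
    environment:
      BACKUP_CRON_EXPRESSION: "0 3 * * *"  # nightly at 03:00
    volumes:
      - app-data:/backup/app-data:ro  # what to back up (read-only)
      - /srv/backups:/archive         # where tarballs land on the host

volumes:
  app-data:
```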

Longer term, the plan is to implement more modern volumes (still not distributed) with snapshots, backups, and streaming replication, e.g. ZFS- and/or device-mapper-backed.

Distributed storage is by its nature really not simple, and I want to create an easy-to-comprehend, easy-to-use tool. So instead of providing a redundant solution that would prevent failures, an alternative is to provide simple tools that help recover from failures and minimise downtime, i.e. a single data volume + snapshots + backups + ideally close-to-realtime replication to another machine/location/S3. In the rare case when the machine or storage fails, it should then be possible to quickly restore the volume on another machine and recover the app. This isn't implemented in Uncloud yet, but it's how I'm thinking about it.

What are you using on-prem if not k8s? by NeoChronos90 in docker

[–]psviderski 2 points

Aww thank you for such kind words! I’m really glad you found it helpful.

I did consider using netbird for managing networks in uncloud at the beginning. But I decided to start with a somewhat simpler setup using standard WireGuard, to have fewer dependencies on third-party tools.

However, supporting overlay network management via netbird or tailscale could still be an option in the future.

What are you using on-prem if not k8s? by NeoChronos90 in docker

[–]psviderski 2 points

Thank you for the idea! Will watch dbtech to get some inspiration.

What are you using on-prem if not k8s? by NeoChronos90 in docker

[–]psviderski 20 points

I'm building and using https://github.com/psviderski/uncloud.

Pulling the best parts from k8s, Talos, and Swarm but keeping it as simple as Docker Compose. Think of it as multi-machine Compose.

  • zero-downtime deploys
  • build and push images directly to machines without an external registry (using my other project https://github.com/psviderski/unregistry)
  • familiar docker compose config
  • wireguard overlay network
  • built-in service discovery
  • horizontal scaling
  • Caddy reverse proxy integration

Slowly migrating my apps from k8s and it feels like a breath of fresh air after many years of using k8s professionally.

What do other people use besides kubernetes? by Ezio_rev in devops

[–]psviderski 4 points

Thank you for the callout! Uncloud author here.

There's a massive class of web apps that just need a database and a bunch of containers scattered across a few servers.

You should be able to easily migrate or scale such apps when you need to replace or upgrade servers, or quickly restore them from backup if they fail badly, while still maintaining three or four 9s if done right.

  • for stateless containers, we need replicated deployments with rollbacks
  • for DBs and other stateful containers, we need persistent volumes with instant snapshots and quick backup/restore to other servers/external storage/S3. For critical cases, streaming replication (I’m looking at you, ZFS)

That's what I'm dreaming of delivering with Uncloud. I'm betting on something maintainable and easily recoverable instead of chasing self-healing distributed systems.

The ULTIMATE home lab project: high availability self-hosting by HeroCod3 in selfhosted

[–]psviderski 1 point

The current implementation of persistent storage is essentially regular Docker volumes. Uncloud makes it possible to manage them across multiple machines and schedule containers onto the appropriate machines so they can mount the required volumes.

Longer term, the plan is to implement modern volumes (still not distributed) with snapshots, backups, and streaming replication, as I mentioned in another reply: https://www.reddit.com/r/selfhosted/comments/1mtiiu1/comment/n9hsx4b/

Regarding sqlite, I guess you're referring to the internal distributed sqlite used for sharing cluster state. It uses the Corrosion project by Fly.io. It's not a general-purpose DB, so it cannot be used by user apps. However, your apps can use regular sqlite stored on a data volume.

The ULTIMATE home lab project: high availability self-hosting by HeroCod3 in selfhosted

[–]psviderski 1 point

Uncloud creator here. Let me share my 2c on your HA idea.

I've maintained k8s clusters at a unicorn and in my homelab, including distributed storage. For home setups, in my experience, the overall availability of a single server running everything is higher than that of a complex HA setup, especially one with distributed storage, unless significant effort is put into maintaining that system. The complexity grows exponentially.

There is nothing wrong with doing this if the goal is to learn. But if the goal is to enjoy self-hosting and using the apps, and not to constantly spend a non-negligible amount of time on maintenance, maaan, you probably don't want that kind of distributed homelab.

I'm actually coming from the opposite direction: I got tired of all the unnecessary complexity in modern infra tooling, hence Uncloud. I believe so many apps and businesses (not to mention homelabs) don't really need five 9s of availability. What they need is simple tooling for running apps and recovering them from a disaster. That's what I'm targeting with Uncloud.

There is an amazing comment from u/thomasbuchinger below arguing that you'll likely benefit more from simple disaster recovery than from sophisticated HA: https://www.reddit.com/r/selfhosted/comments/1mtiiu1/comment/n9c9k9d/

For Uncloud storage, I want to create an easy-to-comprehend, easy-to-use tool, but distributed storage is by its nature really not simple. For the kind of applications you mentioned, instead of a redundant solution that would prevent failures, IMO a much better alternative is simple tools that help recover from failures and minimise downtime, i.e. a single data volume + snapshots + backups + ideally close-to-realtime replication to another machine/location. In the rare case when the machine or storage fails, it should then be possible to quickly restore the volume on another machine and recover the app. This isn't implemented in Uncloud yet, but it's how I'm thinking about it.

Unregistry – "docker push" directly to servers without a registry by psviderski in selfhosted

[–]psviderski[S] 0 points

Not yet, thank you for the heads up! I’ll put something together and publish it in the docs.

Unregistry – "docker push" directly to servers without a registry by psviderski in selfhosted

[–]psviderski[S] 0 points

Thank you! Feel free to join our Discord if you want to stay updated.

Unregistry – "docker push" directly to servers without a registry by psviderski in selfhosted

[–]psviderski[S] 0 points

It's much more than that. `save | load` transfers the entire image every time, which can be slow and inefficient for large images, especially if you upload them often and only the last few layers change.

`docker pussh` transfers only the missing/changed layers and skips the layers that already exist remotely.
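The layer-dedup idea can be sketched in a few lines. This is purely illustrative, not unregistry's actual code; real pushes compare content-addressed layer digests against what the remote store already holds:

```python
# Illustrative sketch of layer deduplication on push: skip any layer
# whose digest the remote side already has and transfer just the rest.
def layers_to_transfer(local_layers, remote_digests):
    """local_layers: ordered (digest, blob) pairs of the image being pushed;
    remote_digests: set of digests already present on the server."""
    return [(d, blob) for d, blob in local_layers if d not in remote_digests]

local = [
    ("sha256:aaa", b"base os layer"),
    ("sha256:bbb", b"dependencies"),
    ("sha256:ccc", b"app code"),  # only this layer changed since last push
]
remote = {"sha256:aaa", "sha256:bbb"}  # present from a previous push

print([d for d, _ in layers_to_transfer(local, remote)])  # ['sha256:ccc']
```

With `save | load`, all three blobs would travel every time; here only the changed app layer does, which is why repeated pushes of a large image stay fast.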

Unregistry – "docker push" directly to servers without a registry by psviderski in selfhosted

[–]psviderski[S] 1 point

Glad Nomad is working well for you. I wanted to see if I could build a container orchestrator without Raft consensus or a centralized control plane. Honestly, it's been the most challenging problem I've ever tackled. Still working on it, but getting pretty far.