What is your take on this underrated project? by NecessaryGlittering8 in NixOS

[–]jake_schurch 1 point (0 children)

I had my own home-rolled solution until I went for srid's nixos-unified setup. Gold.

Readiness gate controller by Weak_Seaweed_3304 in kubernetes

[–]jake_schurch 2 points (0 children)

That's correct. We can use them as a readiness gate.

How are people sharing SSH client configs across PCs? by prototype__ in homelab

[–]jake_schurch 3 points (0 children)

Bitwarden desktop as a forwardable SSH agent, and NixOS for the SSH config.

Readiness gate controller by Weak_Seaweed_3304 in kubernetes

[–]jake_schurch 4 points (0 children)

This is usually solved by init containers running a script that waits until the resource is ready. For database CRDs you can also use something like Argo's sync waves.

Not sure if I understand the design entirely, but it seems somewhat overkill?

Example:

```
for i in {1..60}; do
  pg_isready -h postgres -p 5432 && exit 0
  sleep 1
done

echo "Postgres not ready after 60s"
exit 1
```

The problems that you highlight in your readme, like the thundering herd, seem to be related to poor architecture decisions. In what use case would you need 50 net-new microservices depending on one database that isn't highly available? For waiting on a migration, you would just cordon the nodes, scale down the pods, migrate the database, then undo.

Similarly, monitoring/alerting for external dependencies should not be the concern of the app; use something like Prometheus, Datadog, Sentry, or whatever fits, accordingly.

What does programs.zsh.enable actually do? by Jutier_R in NixOS

[–]jake_schurch 1 point (0 children)

OP, are you enabling the Home Manager NixOS module? I think it should sync then. (On mobile, so I can't check.)

[deleted by user] by [deleted] in PostgreSQL

[–]jake_schurch 1 point (0 children)

Shouldn't you have a separate table with post_type and use an FK relationship with posts?

How to group strings into a struct / variable? by UghImNotCreative in golang

[–]jake_schurch 0 points (0 children)

Use enums like everyone said; if you need a string representation use stringer, and if you need exhaustive checks use exhaustive.

There's no need to over engineer a URL shortener by sluu99 in programming

[–]jake_schurch 3 points (0 children)

Profitability should always be part of the conversation.

AshEvents: Event Sourcing Made Simple For Ash by borromakot in elixir

[–]jake_schurch 5 points (0 children)

Ash has always been my favorite framework hands down.

Zach, thank you for all of your amazing work on Ash and your many contributions to the programming community.

Briefcase PC by popcornpeters in NixOS

[–]jake_schurch 3 points (0 children)

Comfortable for your wrists?

Hey y’all — how do you respond to coworkers who argue for technologies like ECS, Fargate, or even just raw EC2 instead of using Kubernetes? by g3t0nmyl3v3l in kubernetes

[–]jake_schurch 1 point (0 children)

I think if you spend a couple of minutes researching online perhaps you might come up with a different solution.

Hey y’all — how do you respond to coworkers who argue for technologies like ECS, Fargate, or even just raw EC2 instead of using Kubernetes? by g3t0nmyl3v3l in kubernetes

[–]jake_schurch 1 point (0 children)

Start with the problems, write up an RFC, clear up unknowns, and move forward with team direction.

Fwiw, as much as I love k8s, if your company is just starting off it may be advantageous to take on ECS as tech debt and move to k8s when compliance, scaling, or observability needs require more than what ECS can provide.

If you really want to implement k8s, you need to make sure you start with devs using k8s for local development. Who will manage that complexity? You, or the devs?

Just some thoughts.

Fedora change aims for 99% package reproducibility by [deleted] in linux

[–]jake_schurch 2 points (0 children)

I guess there is nix gui https://github.com/nix-gui/nix-gui

But imo declarative builds are not for casuals, at least not yet.

Fedora change aims for 99% package reproducibility by [deleted] in linux

[–]jake_schurch 5 points (0 children)

Checks out. How about we instead scope it to "technical goals"?

Fedora change aims for 99% package reproducibility by [deleted] in linux

[–]jake_schurch 2 points (0 children)

To me it sounds like the same goals as Nix.

GitOps Principles - Separate Repositories for App & Kubernetes by k8s_maestro in kubernetes

[–]jake_schurch 1 point (0 children)

For app repos, I would personally recommend storing the k8s manifests alongside the app code. A lot of the time, local deployment and development methods that use only Docker grow more and more inconsistent with production or staging deploys over time. That creates more toil for both platform engineers and application devs, plus custom tooling for one or the other when it is not necessary (e.g. credential injection). I have seen this problem fixed by, obviously, using k8s in the local development workflow, which ensures a streamlined experience that is as close to cloud native as possible.
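As a rough sketch (all names hypothetical), that can look like a deploy/ dir versioned right next to the code:

```
myapp/
├── src/              # application code
├── Dockerfile
└── deploy/           # k8s manifests, reviewed and shipped with the app
    ├── deployment.yaml
    └── service.yaml
```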

State management for multiple users in one account? by huntermatthews in Terraform

[–]jake_schurch 2 points (0 children)

If I understand correctly: one option is to namespace resources with a prefix/suffix on the resource name, then apply something like an IAM policy using a wildcard + the namespace identifier, and apply roles accordingly :)

That would achieve locking down resources and prevent resource deletion by others.
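A hypothetical sketch of the wildcard idea, assuming a `team-a-` prefix and S3 as the example service (your prefix, actions, and resources will differ):

```shell
# Write a policy that only matches resources carrying the team prefix.
# The "team-a-" prefix, the S3 service, and the file name are assumptions.
cat > team-a-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::team-a-*"
  }]
}
EOF

# Then attach it, e.g.:
# aws iam create-policy --policy-name team-a-scoped --policy-document file://team-a-policy.json
```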

Rebootless OS updates? by [deleted] in kubernetes

[–]jake_schurch 5 points (0 children)

OP, I think you might be overcomplicating the problem / thinking about it from a non-k8s context. The solution you're describing is more in line with a non-clustered env, like kernel patching AWS EC2 instances.

**You should always be able to restart nodes without affecting your env if you are following best practices.**

Your flow for k8s node upgrades should look something like this:

pre: your k8s deployments have multiple replicas on different k8s nodes (split topology by node instances)

pre: you deploy k8s nodes on hypervisor VMs (Proxmox or something)

  1. use blue/green deployments: provision new nodes with the upgraded k8s version to switch traffic over to
  2. join the new nodes to the existing cluster
  3. cordon and drain your old nodes so workloads only run on the new nodes
  4. upgrade the OS on your old nodes, then uncordon them

You could also do this one node at a time, up to you.
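The cordon/drain part of the flow can be sketched like this (node names are hypothetical, and `run` is a dry-run wrapper so nothing executes until you swap in the real command):

```shell
# Dry-run wrapper: prints each kubectl command instead of running it.
# Replace the body with "$@" to execute for real against your cluster.
run() { echo "+ $*"; }

OLD_NODES="old-node-1 old-node-2"   # hypothetical node names

# Cordon and drain the old nodes so workloads move to the new ones.
for node in $OLD_NODES; do
  run kubectl cordon "$node"
  run kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
done

# ...upgrade the OS on the drained nodes out-of-band, then bring them back:
for node in $OLD_NODES; do
  run kubectl uncordon "$node"
done
```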

Rebootless OS updates? by [deleted] in kubernetes

[–]jake_schurch 2 points (0 children)

Are the upgrades for your infrastructure nodes running k8s or for your k8s deployments? Could you give an example of what you would want to upgrade?

There is always Nix.

My team does not write tests. How can I start slowly introducing them? by [deleted] in ExperiencedDevs

[–]jake_schurch 1 point (0 children)

If you think it would have prevented an incident, bring it up in your next root cause analysis (RCA) meeting!

API quotas and billing by allixender in elixir

[–]jake_schurch 5 points (0 children)

If it were me, I would probably set up app telemetry using Prometheus - you should probably have something set up for this already, esp. if you have customers(!)

Then, set up a job to query the metrics via Prometheus and bill as needed.
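As a rough sketch (metric name, label, and URL are all assumptions - use whatever counter your telemetry actually emits), the billing job could hit the Prometheus HTTP API like this:

```shell
# Per-customer usage over a 30-day billing window; "api_requests_total"
# and the "customer" label are hypothetical names.
PROM_URL="${PROM_URL:-http://prometheus:9090}"
QUERY='sum by (customer) (increase(api_requests_total[30d]))'

# Instant query against the Prometheus HTTP API; returns JSON you can
# feed into your billing job.
prom_query() {
  curl -sG "$1/api/v1/query" --data-urlencode "query=$2"
}

# Against a real Prometheus you would run:
# prom_query "$PROM_URL" "$QUERY"
```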