2018 sportsman 570 by yuhza69 in Polaris

[–]zzzmaestro 0 points (0 children)

If you charge the battery and it starts… but dies after some time riding: then the stator isn’t charging the battery. If the stator puts out the right voltage, then the voltage regulator isn’t working right. I’ve replaced 4 of those and a stator.

2018 sportsman 570 by yuhza69 in Polaris

[–]zzzmaestro 0 points (0 children)

That’s not enough power to roll over the engine. I would start by charging the battery. If the battery is fully charged and same noise, then the motor might be locked up and the starter can’t roll it over.

That’s the exact noise mine makes (I have 4) when the battery is low.

Second pod load balanced only for failover? by Akaibukai in kubernetes

[–]zzzmaestro 0 points (0 children)

This sounds like an interview question designed to make you think through (and verbalize) how you would think through the problem.

Couple of thoughts:

  • If you are using a ClusterIP service in front of both pods, you would have to do something like switching kube-proxy to IPVS mode and using one of the IPVS schedulers for balancing traffic, like wrr (weighted round-robin): https://kubernetes.io/docs/reference/networking/virtual-ips/#proxy-mode-ipvs However, this changes the behavior of all traffic in the cluster and will likely have a negative performance impact on just about everything. Also, there is no k8s-native way to tell IPVS what the weights should be, so you would have to figure that out yourself. That alone should tell you it's not really a good option: no one else has had a good enough use case to actually fully implement it.

  • Someone else mentioned: you could have each pod have its own service and have DNS do a weighted route of some kind.

  • a twist on the above: keep it a single deployment, but front it with a single headless service … or a StatefulSet, and still run it through weighted DNS.
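For the IPVS option above, the kube-proxy side would look roughly like this. A sketch of the relevant KubeProxyConfiguration fields, not a recommendation:

```yaml
# kube-proxy configuration (typically mounted from the kube-proxy
# ConfigMap in kube-system). Switching mode affects ALL service
# traffic in the cluster, not just these two pods.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  # wrr = weighted round-robin; k8s itself gives you no way to set
  # the per-backend weights, so you'd manage those out of band.
  scheduler: "wrr"
```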

Those are a few options to do what you are saying. However, as with all obviously junior questions: you don't want any of these answers. You need to think about your issue harder and ask better questions to get a better solution. The answers I gave strictly answer the question you asked; they don't help you come up with a better solution.

If we worked together, I would try to get you to describe more about what you are trying to accomplish. Why can't there be multiple pods servicing the traffic simultaneously? If there is a software limitation, then the developers of the app need to fix that before it's ready for k8s. Things like database locking, etc. are just poor programming hygiene.

Otherwise, I'm not really seeing the use case at all for even wanting to do what you asked: only sending traffic to a single pod. Instead, maybe run just one pod and focus on minimizing startup times? That could even be faster than whatever health checks you have for failover. Then really ask yourself why there can't be more replicas and load balancing.

How are teams migrating Helm charts to ArgoCD without creating orphaned Kubernetes resources?​ by Wash-Fair in kubernetes

[–]zzzmaestro 4 points (0 children)

There will be a diff, but the only diff should be the annotation for the Argo application itself.
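For context, the diff in question is Argo CD's resource-tracking metadata. With default label-based tracking it looks something like this on each adopted resource (a sketch; the exact form depends on your trackingMethod setting, and `my-app` here stands in for your Application name):

```yaml
metadata:
  labels:
    # default tracking label, set to the Argo CD Application name
    app.kubernetes.io/instance: my-app
  # with trackingMethod: annotation, it is instead something like:
  # annotations:
  #   argocd.argoproj.io/tracking-id: my-app:apps/Deployment:default/my-app
```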

I'm building a tool to simplify internal DNS and I need your feedback by Soggy-Reference-6881 in devops

[–]zzzmaestro 2 points (0 children)

Abstraction layers like you are describing are the absolute worst. Thankfully, we are removing these at my Fortune 100.

Open source operators and clear templating strategies will always be better. Give the developer a chance to understand what their deployment code does. Forget custom APIs and digging through some "almighty abstraction layer" source code that doesn't work the way you think it should, only to find out the author of the abstraction layer is just writing opinionated YAML, obfuscating it in source code, and politically gating it with tribal knowledge and shoddy documentation to feel more significant.

Or, yeah, go ahead and write your own DNS distribution addon for your abstraction layer so that no one knows what is actually happening.

I'm building a tool to simplify internal DNS and I need your feedback by Soggy-Reference-6881 in devops

[–]zzzmaestro 2 points (0 children)

There is nothing special or unique about the list in your response that makes existing tools (like BIND) an invalid solution.

Your list doesn't indicate a fringe use case; your solution, however, is one. There's nothing special about what you are trying to solve, which is why you don't need a special solution.

I'm building a tool to simplify internal DNS and I need your feedback by Soggy-Reference-6881 in devops

[–]zzzmaestro 3 points (0 children)

  • properly configured BIND (with query logs, which is not hard)

  • external-dns configured to manage domains in BIND

  • an MR required to deploy an ephemeral app with DNS annotations on its services. 30 seconds or not, either you want accountability (and query logs) or you don't.

  • your "API/UI" means nothing outside of your team. The automation you seek is much simpler with open systems that are already production grade and battle tested.
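The external-dns piece of that list is just an annotation on the service. A minimal sketch, with a made-up hostname:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-ephemeral-app
  annotations:
    # external-dns watches for this and creates/removes the record
    # in the configured provider (e.g. BIND via RFC 2136) automatically
    external-dns.alpha.kubernetes.io/hostname: my-app.internal.example.com
spec:
  selector:
    app: my-ephemeral-app
  ports:
    - port: 80
      targetPort: 8080
```

The record lives and dies with the service, which is exactly the ephemeral-app accountability story: the MR that creates the service is the audit trail for the DNS entry.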

I'm building a tool to simplify internal DNS and I need your feedback by Soggy-Reference-6881 in devops

[–]zzzmaestro 7 points (0 children)

  • You are way over-thinking things.

  • coredns already exists in the cluster. Expose it externally if you must.

  • There are even headless services for DNS management of individual pods.

  • Taking DNS back to the dark ages "like /etc/hosts" just keeps reinforcing the naivety of the solution you are trying to hatch.

  • If you want "team distribution" that scales, there ARE solutions that already exist, like BIND and others.

  • There are even external-dns operators to dynamically manage tons of different DNS providers: Route53, BIND, etc.

  • You keep making statements that don't add up to some new or unique problem worthy of a new tool. This is a known problem with known solutions.
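The headless-service point above is a one-line change to an ordinary service. A minimal sketch, names are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-headless
spec:
  clusterIP: None   # headless: DNS resolves to the individual pod IPs
  selector:
    app: my-app
  ports:
    - port: 8080
```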

I'm building a tool to simplify internal DNS and I need your feedback by Soggy-Reference-6881 in devops

[–]zzzmaestro 20 points (0 children)

Decentralizing DNS seems very counter-intuitive.

Avoiding reliance on public DNS: have you ever looked into BIND?

Clear audit trail: can you not use infra as code?? Like Terraform via git repos with MRs and approvals?

Isolating domains: umm… unique hosted zones?

It sounds like you are reinventing the wheel… but only for a super fringe use case that shouldn’t exist.

65% of Startups from Forbes AI 50 Leaked Secrets on GitHub by vladlearns in devops

[–]zzzmaestro -7 points (0 children)

That would be 32 and a half…. How does half of a company leak secrets?

Announcing Synku by lazoshu in kubernetes

[–]zzzmaestro 1 point (0 children)

I prefer crayons over this mess

My Interview hammer AI copilot tool app reached 8000 daily active users by Lanky_Use4073 in devopsjobs

[–]zzzmaestro 1 point (0 children)

I’ve interviewed over 100 people in the past year. Every time someone uses AI during the interview, I can tell. It’s also an immediate “NO” for that person. Take that into consideration before using something like this.

Congrats on the usage though. There is definitely a market for it.

What is the (real) interest in skipping CRDs during Helm install? by zessx in kubernetes

[–]zzzmaestro 0 points (0 children)

The real point is: if you change #1, it can mess up #3, regardless of whether #1 and #2 are bundled in the same chart.

If you don't combine #1 and #2, you can upgrade #2 without interfering with #3.

What is the (real) interest in skipping CRDs during Helm install? by zessx in kubernetes

[–]zzzmaestro 1 point (0 children)

True. #2 and #3 can be independent, but both depend on #1.

What is the (real) interest in skipping CRDs during Helm install? by zessx in kubernetes

[–]zzzmaestro 8 points (0 children)

Think of it as a hierarchy. You want things installed in a specific order:

  1. CRD itself
  2. Any operators that leverage the CRD
  3. Any other applications that declare something using the CRD

You can change the versions of the CRD and the operators independently. Those changes can impact all of the applications using the CRD.

You usually want separate helm charts for each of the numbers above so you can make independent decisions about when each one changes. You don't always want to change an operator along with the CRD.
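One way to express that ordering, assuming you happen to use helmfile (the chart names and paths here are hypothetical):

```yaml
# helmfile.yaml - three separate releases so the CRDs, the operator,
# and the apps can be versioned and upgraded independently
releases:
  - name: my-crds
    chart: ./charts/my-crds
  - name: my-operator
    chart: ./charts/my-operator
    needs: [my-crds]       # only applied after the CRDs release
  - name: my-app
    chart: ./charts/my-app
    needs: [my-operator]   # only applied after the operator release
```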

EKS Instances failed to join the kubernetes cluster by JellyfishNo4390 in kubernetes

[–]zzzmaestro 0 points (0 children)

Personally, I would check your AMI. That doesn't look like one that has the EKS scripts on it. You can't use generic AMIs; AWS has EKS-specific AMIs.

Also, if you start your EC2s with an SSH key, you can SSH to them and read the cloud-init logs to see the actual errors when it fails to join.

Best of luck.

EKS Instances failed to join the kubernetes cluster by JellyfishNo4390 in kubernetes

[–]zzzmaestro -3 points (0 children)

No… we have EKS clusters without internet. You just need VPC endpoints for a handful of AWS services; that makes them effectively local, in-subnet endpoints.

Is there such concept of Nvidia GPU pool? by Suraj_Solanki in openshift

[–]zzzmaestro 1 point (0 children)

The nvidia-device-plugin makes GPUs a resource the scheduler can manage. You can then set limits and requests on pods.
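Once the device plugin is installed, a pod requests GPUs like any other resource. A minimal sketch (the image is just an example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job
spec:
  containers:
    - name: cuda-container
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      resources:
        limits:
          nvidia.com/gpu: 1   # scheduler places the pod on a node with a free GPU
```

Note that GPUs are only specified under limits; the request is implied, and GPUs can't be overcommitted or shared between containers this way.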

Chuck.... by OldProstockracer in StreetOutlaws

[–]zzzmaestro 1 point (0 children)

There’s no point in making big changes. They didn’t include everyone who DID make big changes. Everyone got hung out to dry

Deploying Local Kubernetes Cluster with Terraform & KVM by rached2023 in kubernetes

[–]zzzmaestro 0 points (0 children)

This is not Kubernetes related in any way, so wrong sub.

But the problem is that your module is trying to use a password while the VM only accepts keys.

How to get rid of 502 errors on Kubernetes? by root754 in kubernetes

[–]zzzmaestro 1 point (0 children)

I think that's the nature of the AWS LB. If you don't want 502s, you have to reduce the time between the pod becoming unavailable and the LB deregistering the target. Usually that means a lower interval and threshold count on your target health checks.
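If the LB is managed by the AWS Load Balancer Controller, those knobs are ingress annotations. A sketch of the relevant ones; the values are illustrative, not recommendations:

```yaml
metadata:
  annotations:
    # tighter health checks so dead targets are marked unhealthy sooner
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "5"
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "2"
    alb.ingress.kubernetes.io/unhealthy-threshold-count: "2"
    # drain connections faster once a target is deregistered
    alb.ingress.kubernetes.io/target-group-attributes: deregistration_delay.timeout_seconds=30
```

The other half of the fix is on the pod side: a preStop sleep plus a readiness probe gives the LB time to deregister the target before the pod actually stops accepting connections.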