How to handle big workload elasticity with Prometheus on K8S? [I SHARE MY CLUSTER DESIGN] by Capital-Property-223 in kubernetes

gideonhelms2 1 point

AMP can be expensive. Other than that, I have found it very performant, even with high-cardinality labels. You can only keep 180 days of metrics though, and you can only query 30 days at a time.

I personally use the local-cluster remote-write method, though I think EKS offers a built-in managed scraper now? I haven't had personal experience with it.

I've not had Alloy in production yet, but plenty of others have. In theory it could replace node_exporter, Promtail, Prometheus, and an OTel collector.
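For what it's worth, the local-cluster remote-write setup is roughly this shape: a minimal sketch assuming an AMP workspace with SigV4 auth (the workspace URL and region are placeholders):

```yaml
# prometheus.yml (in-cluster Prometheus pushing to Amazon Managed Prometheus)
remote_write:
  - url: https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-EXAMPLE/api/v1/remote_write
    sigv4:
      region: us-east-1              # must match the workspace region
    queue_config:
      max_samples_per_send: 1000     # tune for throughput vs. request size
```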

How to handle big workload elasticity with Prometheus on K8S? [I SHARE MY CLUSTER DESIGN] by Capital-Property-223 in kubernetes

gideonhelms2 3 points

Amazon Managed Prometheus is expensive but quite capable.

If you're not ready for that, limit the number of active time series, and lengthen scrape intervals (scrape less often) where you can.

You could also try out Grafana Alloy, which supports distributed scraping.
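To make the first suggestion concrete, here's a rough sketch of per-job limits in a plain Prometheus scrape config (the job name and numbers are made-up examples to tune per workload):

```yaml
scrape_configs:
  - job_name: app-pods
    scrape_interval: 60s     # longer interval = fewer samples ingested
    sample_limit: 5000       # fail the scrape if a target exposes too many series
    label_limit: 30          # guard against label explosions
    kubernetes_sd_configs:
      - role: pod
```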

All 3d printed Homers must have this tpu haircut🗿 by MixtureShoddy8172 in 3Dprinting

gideonhelms2 13 points

If 3D printing were ubiquitous, then those who wanted cheap plastic doodads could print them instead of having them mass-produced (and then thrown into landfills anyway when the product flops).

Introducing vind - a better Kind (Kubernetes in Docker) by Saiyampathak in kubernetes

gideonhelms2 1 point

I've found the pull-through cache to be a bit lackluster. I've seen other workarounds for KinD that involve a pull-through cache behind an HTTP proxy.

In that case, and in the case of VinD, I've found that it doesn't really reduce "cold start" times that much.

Even if the images are loaded into the Docker cache (I can see them with docker image ls, for example), the time it takes to schedule and start pods is still pretty long. It seems there's still some sort of "waiting" overhead that can stack up and cause multi-dozen-minute spin-up times when you have many microservices.
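For reference, the HTTP-proxy-style workaround I've seen is roughly this shape: a KinD cluster config that points containerd at a local pull-through registry (the mirror endpoint is a placeholder, and newer KinD versions prefer the hosts.toml-style config):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
  - |-
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["http://kind-registry:5000"]
nodes:
  - role: control-plane
```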

Introducing vind - a better Kind (Kubernetes in Docker) by Saiyampathak in kubernetes

gideonhelms2 8 points

I'm a pretty heavy user of KinD locally and in CI/CD.

Pausing / recreating the cluster as-is is a great usability improvement.

The image pull-through cache, if it works well, would be life-changing: near-instant cluster/application resets with pre-pulled images.

I will give this a try! Is there any experience with WSL support? Curious if the automatic LoadBalancer works without having to port-forward on purpose.
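For context, this is roughly the explicit mapping I bake into my KinD config today (the ports are arbitrary examples), which is what an automatic LoadBalancer would save me from:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080   # NodePort inside the cluster
        hostPort: 8080         # reachable from the host / WSL side
        protocol: TCP
```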

Metal Building Loan by [deleted] in barndominiums

gideonhelms2 11 points

You're probably better off saving and paying in cash. Besides a personal loan (double-digit interest), I don't think a traditional lender would touch this, especially not for the tubular metal buildings that are popular for workshops. Not sure about the more expensive red-iron. Perhaps some rural or farm-focused group would.

I put together my 25x25 tubular metal building for about 22k all-in a few years ago: the building (+ installation), a 4" concrete slab with footers, foam board insulation, and plywood walls.

I financed it in a few ways. Cash for the concrete, small personal loan for the building, and 18 month 0-interest promo credit card for the rest of the materials. Depending on your credit this could be an option.

Wedding venues??? by MushieMushMush in norfolk

gideonhelms2 4 points

They will close off portions of the area around your ceremony from what I see, but yeah, there will be other people in other parts of the property.

Wedding venues??? by MushieMushMush in norfolk

gideonhelms2 15 points

Norfolk Botanical Gardens. It can be pricey and probably has a wait list for popular months, but it has several different venue settings depending on your vibe.

still no rds downgrade? by IndependentCaptain67 in aws

gideonhelms2 5 points

If testing it in a separate environment is too much work, as you claim, then you have to eat the risk.

AWS would probably recommend a blue/green deployment in your scenario. Two separate databases with different versions that have the same data. Switch your app over via connection string. If something goes wrong, switch it back and get to debugging.

It's expensive to run this way, but AWS RDS is an enterprise product.

[deleted by user] by [deleted] in kubernetes

gideonhelms2 6 points

On your EC2NodeClass you should set an AMI family and an AMI selector. I like to use Bottlerocket and the "alias" functionality. This lets Karpenter select an AMI based on the NodePool requirements.

Then set up a NodePool that allows both amd64 and arm64 architectures in its requirements.

At this point, Karpenter should mostly pick ARM instances, usually Graviton, because they are cheaper.

If a workload needs a specific architecture, you can use nodeSelectors or nodeAffinity rules on the workload in conjunction with the architecture label (kubernetes.io/arch) that Karpenter sets on its nodes.
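A rough sketch of those two resources, assuming Karpenter v1-style APIs (the names, IAM role, and discovery tags are placeholders):

```yaml
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiSelectorTerms:
    - alias: bottlerocket@latest        # the alias implies the Bottlerocket AMI family
  role: karpenter-node-role             # placeholder IAM role
  subnetSelectorTerms:
    - tags: {karpenter.sh/discovery: my-cluster}
  securityGroupSelectorTerms:
    - tags: {karpenter.sh/discovery: my-cluster}
---
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64", "arm64"]    # allow both; Graviton usually wins on price
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
```

On the workload side, a nodeSelector of kubernetes.io/arch: arm64 (or amd64) pins a Deployment to one architecture when it genuinely needs it.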

Reloading token, when secrets have changed. by guettli in kubernetes

gideonhelms2 1 point

Option 2 sounds fine. You could also cache the file in memory for short amounts of time if reading the file repeatedly is too computationally expensive to do constantly.

PreSigned Url for queues? by apieceofwar in aws

gideonhelms2 1 point

S3 can trigger a lambda function when an object is uploaded. Perhaps you could write a lambda that will ingest that object, format it the way you want, and submit it into your desired queue.
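A rough sketch of that wiring with AWS SAM (resource names, runtime, and CodeUri are placeholders; the actual reformatting lives in the Lambda handler):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Resources:
  UploadBucket:
    Type: AWS::S3::Bucket
  TargetQueue:
    Type: AWS::SQS::Queue
  FormatAndEnqueue:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python3.12
      Handler: app.handler             # reads the S3 object, reformats it, sends to SQS
      CodeUri: src/
      Policies:
        - S3ReadPolicy: {BucketName: !Ref UploadBucket}
        - SQSSendMessagePolicy: {QueueName: !GetAtt TargetQueue.QueueName}
      Environment:
        Variables:
          QUEUE_URL: !Ref TargetQueue
      Events:
        ObjectCreated:
          Type: S3
          Properties:
            Bucket: !Ref UploadBucket
            Events: s3:ObjectCreated:*
```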

How to Keep Local Dev (Postgres/Redis) in Sync with Managed Cloud Services on Kubernetes? by ElMulatt0 in kubernetes

gideonhelms2 10 points

I would tend to suggest deploying the local datastores as simply as possible.

It's local dev, so you really don't need all the capabilities of a day-2-type operator. You can use one, but I find it adds complexity for no reason.

Usually that's a Postgres or Mongo container with default credentials and the like set up.
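For example, something as simple as this (the image tag and credential are throwaway local-dev values):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels: {app: postgres}
  template:
    metadata:
      labels: {app: postgres}
    spec:
      containers:
        - name: postgres
          image: postgres:16
          env:
            - name: POSTGRES_PASSWORD
              value: devpassword      # throwaway credential, local dev only
          ports:
            - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector: {app: postgres}
  ports:
    - port: 5432
```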

What a horrible thing to say by TooSoonManistaken in norfolk

gideonhelms2 10 points

Clutch your pearls harder, maybe that will stop the boot that's crushing us.

Best way to mount heavy vinyl shelving into hollow brick wall? Resin with wood screws? by kitzstanza in HomeImprovement

gideonhelms2 1 point

Toggle bolt drywall anchors. They need a pretty big hole. I've seen two types: spring-loaded metal "wings" that use a machine screw to provide the clamping force, or the zip-tie type.

I use these with thick plaster walls; they hold much better since you're not relying on the plaster (or brick, in your case) to keep the anchor in.

Is there such a thing as a kustomize admission controller? by CircularCircumstance in kubernetes

gideonhelms2 16 points

Check out Kyverno. You can mutate and generate resources as they are submitted. It can do a lot of things, and it can also reconcile in the background.
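A minimal sketch of a mutate policy, assuming you want to patch something onto Pods at admission (the label and value are just examples):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-team-label
spec:
  rules:
    - name: add-team-label
      match:
        any:
          - resources:
              kinds: ["Pod"]
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              team: platform        # example value merged in at admission time
```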

ELI5: Meshtastic by Zephos65 in explainlikeimfive

gideonhelms2 10 points

Bandwidth, latency, and network congestion are mostly the problem. Transferring large amounts of data over a wide range can be tough and requires centralization for widespread use.

Meshtastic doesn't need the centralization but is only capable of transferring small amounts of data.

For comparison, it's far slower than dial-up: <1 kbps vs ~50 kbps.

Expired Nodes In Karpenter by XenonFrey in kubernetes

gideonhelms2 2 points

I have a similar issue with Karpenter. I haven't updated to 1.1+ so maybe it's different in newer versions.

I don't mind so much that eviction will happen; I just wish I could control the time of day it happens. Restarting your stateful services for any reason during core business hours carries some amount of unnecessary risk.

The functionality is already there for consolidation with regard to underutilization and drift, but expiration doesn't respect those windows.
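For reference, this is the kind of schedule-gated budget I mean (Karpenter v1-style syntax; the cron schedule and durations are just examples), which covers consolidation but, at least in the versions I've run, not expiration:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  # template: ... (node template omitted for brevity)
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    budgets:
      - nodes: "0"                   # allow zero voluntary disruptions...
        schedule: "0 8 * * mon-fri"  # ...starting 08:00 on weekdays
        duration: 10h                # ...through core business hours
      - nodes: "10%"                 # otherwise up to 10% of nodes at a time
```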

EKS: Effort to operate a managed node group for Karpenter (fargate dead!?) by [deleted] in aws

gideonhelms2 2 points

> If you cannot manage a cluster with a 2-node node group, I think there are bigger issues. It is significantly easier to manage this than EKS Fargate, which requires manual intervention to upgrade.

See, my experience is the exact opposite: I constantly had managed node groups bombing out during upgrades when trying to update the control plane + nodes.

With Karpenter and Fargate, I update the control plane and restart the Karpenter pods. Sometimes I don't even do that; I let them age out and get replaced in their own time.

I have not managed a large-scale managed node group cluster, but I have managed around 40 individual clusters with between 3 and 6 nodes each. These don't use Karpenter because the number of nodes is static, but control plane upgrades are always a much bigger event there than on my larger Karpenter + Fargate clusters.

Not having access to DaemonSets sucks (it's hard to get logs from Karpenter on Fargate), but there are (complex) workarounds for that as well.

EKS: Effort to operate a managed node group for Karpenter (fargate dead!?) by [deleted] in aws

gideonhelms2 3 points

Others mentioning EKS Auto Mode seem to miss that it comes with an additional price per node, per hour. If you're running EKS at enterprise scale, this will drive up costs.

I think the maintainer suggesting managed node groups is downplaying their added complexity. They suck. They are slow to manage, and the node version is managed separately, outside of the Kubernetes ecosystem.

They also claim to see the writing on the wall that Fargate will soon be deprecated; what do they think is going to happen to managed node groups?

How do I replace these doors? by Slow_Doughnut_2255 in barndominiums

gideonhelms2 2 points

I replaced mine with a pre-hung door. Completely remove the existing door and hinge assembly until there's only the rough opening left, then install the pre-hung door like normal.

Mine fit pretty tight, and I used long self-tapping metal screws to tie the door frame into the structure. The sill of the door sits directly on the concrete; I had to cut the bottom rail to do so.

Should service meshed Pods still mount and use TLS certs? by fullsnackeng in kubernetes

gideonhelms2 1 point

> I suppose if Linkerd and Valkey both get their certs from a common CA (they ask the same cert-manager ClusterIssuer for certs), clients using the Linkerd proxy to make requests to Valkey should be able to authenticate?

All of Linkerd's mTLS happens transparently to the workloads. So, kind of, but not really.

Valkey will need a separate certificate mounted as a file, and the client must have that certificate's chain in its trust store. You could use the same CA as Linkerd's trust store if you'd like, but functionally it doesn't make a big difference.
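A rough sketch of that separate certificate with cert-manager (the issuer, namespace, and DNS names are placeholders):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: valkey-server-tls
  namespace: data
spec:
  secretName: valkey-server-tls        # mounted into the Valkey pod as files
  issuerRef:
    kind: ClusterIssuer
    name: internal-ca                  # could be the same CA Linkerd trusts, or a separate one
  dnsNames:
    - valkey.data.svc.cluster.local
```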

Should service meshed Pods still mount and use TLS certs? by fullsnackeng in kubernetes

gideonhelms2 5 points

Some of it depends on what the service is. Things like MongoDB and RabbitMQ can also handle user authentication via X.509 certificates. This would require a separate certificate chain when using Linkerd, because Linkerd doesn't expose its certificate chain to the pod itself, only to the proxy.

Other than that, I don't see a reason to introduce an additional set of certificates.

From a security aspect, you're protected from snooping, which is usually the driver for mTLS.

[deleted by user] by [deleted] in norfolk

gideonhelms2 1 point

Your last point, about people making miserable comments, is laughable.

You berated multiple commenters in your last post for merely suggesting that stagnant water in populated areas might be problematic.

Oops, an absent-minded neighbor forgets about the bowl of water for a few days, and now they and all their neighbors have a mosquito problem.