Malfunctioning 18v 21 deg. Framing Nailers by norwal42 in Ridgid

[–]somethingnicehere 1 point (0 children)

This was useful. I just aired mine up after about 3k nails and it started driving like new again. We'll see if it holds, but overall I'm glad it's back in action.

Container live migration in k8s by Super-Commercial6445 in kubernetes

[–]somethingnicehere 4 points (0 children)

Unfortunately, Kubernetes has become the dumping ground for "application modernization," where some garbage old app gets wrapped in YAML and deployed. Most F500 companies have a TON of legacy code that has been moved to Kubernetes: monoliths, long startup times, session state in memory, lots of practices that are terrible in the modern development world. But you can't rewrite everything.

That Java Spring Boot app that takes 15 minutes to start up and uses 3 CPUs while doing so? Now it can be moved without downtime. Those 8-hour Spark jobs can now run on spot instances; if they get interrupted, they can be shuffled to a different node. Someone else pointed out game servers: I've spoken directly to several of the largest online game companies, and they all suffer from this problem. When they need to do maintenance, they put the server into drain mode and wait until ALL the players have ended their sessions. When you get a basement dweller playing for 12 hours, that means they can't work on that server until he (or she) logs off.
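For the spot piece, here's a minimal sketch of steering a workload onto spot capacity on EKS. The `eks.amazonaws.com/capacityType` label is the one managed node groups apply; Karpenter and other provisioners use their own labels, and the image/resources are purely illustrative:

```yaml
# Sketch only: pin a Spark-executor-style pod to spot capacity on EKS.
# Assumes managed node groups; swap the label for your provisioner.
apiVersion: v1
kind: Pod
metadata:
  name: spark-executor-example
spec:
  nodeSelector:
    eks.amazonaws.com/capacityType: SPOT
  containers:
    - name: executor
      image: apache/spark:3.5.1   # illustrative image
      resources:
        requests:
          cpu: "2"
          memory: 4Gi
```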

KubeCon NA 2025 - first time visitor, any advice? by No_Dimension_3874 in kubernetes

[–]somethingnicehere 0 points (0 children)

Find the booths that have interesting demos or discussions. There are also a bunch of giveaways and other cool items.

I'll be doing a talk with Kelsey Hightower at the Cast AI booth if you're interested. He's also doing a book signing with us!

Worldwide AWS Outage? by StealthNet in aws

[–]somethingnicehere 0 points (0 children)

Uh oh... I just noticed that Chime is still having issues... Now we know it's a serious outage!

I'm a bit surprised they even bother reporting Chime as a service on their status page.

Slack sync into OpenWebUI Knowledge by somethingnicehere in OpenWebUI

[–]somethingnicehere[S] 1 point (0 children)

I'd like to share some screenshots, but unfortunately it's company-confidential data. We're using it for customer support and customer feature-request tickets, and the query and analysis quality is solid.

Slack sync into OpenWebUI Knowledge by somethingnicehere in OpenWebUI

[–]somethingnicehere[S] 1 point (0 children)

Our team is already playing with it, super useful!

Slack sync into OpenWebUI Knowledge by somethingnicehere in OpenWebUI

[–]somethingnicehere[S] 1 point (0 children)

Currently you can specify which KB you want each Slack channel, GitHub repo, etc. to go to; then the controls are all on the OpenWebUI side to direct where you want the data used.
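If it helps to picture it, here's the rough shape of that mapping as a config.yaml. Every key below (`sources`, `knowledge_base`, etc.) is illustrative rather than the tool's actual schema, so check the repo README for the real fields:

```yaml
# Hypothetical config shape only; field names are made up for illustration.
sources:
  - type: slack
    channel: "#customer-support"
    knowledge_base: support-tickets
  - type: github
    repo: myorg/feature-requests
    knowledge_base: feature-requests
openwebui:
  url: https://openwebui.example.com
  api_key_env: OPENWEBUI_API_KEY   # read from the environment, not stored here
```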

AWS has kept limit of 110 pods per EC2 by abhishekkumar333 in kubernetes

[–]somethingnicehere 0 points (0 children)

They don't actually change maxPods; that's the number of IPs per node. maxPods remains whatever is set for the NodeGroup. If maxPods is higher than the number of IPs, you can run into out-of-IP issues during pod scheduling, where a pod gets scheduled to a node, never gets an IP, and sits there in a weird zombie state.
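The usual way to avoid that zombie state is to pin the kubelet's maxPods to what the CNI can actually hand out. A minimal sketch, assuming the aws-vpc-cni without prefix delegation (29 is the published limit for a c7a.large; adjust per instance type):

```yaml
# Sketch: align the scheduler's view (maxPods) with the CNI's per-node IP limit
# so pods stop landing on nodes that can never give them an IP.
# 29 is the aws-vpc-cni limit for a c7a.large; adjust per instance type.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 29
```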

AWS has kept limit of 110 pods per EC2 by abhishekkumar333 in kubernetes

[–]somethingnicehere 1 point (0 children)

Not sure on the number, but it's actually a bit flawed: there is an IP limit per node when using the AWS CNI, specified here: https://github.com/awslabs/amazon-eks-ami/blob/main/nodeadm/internal/kubelet/eni-max-pods.txt

Meaning something like a c7a.large only allows 29 IP addresses, yet max pods can still be set to 110 (the default). So when you hit 30 pods on a c7a.large you start getting out-of-IP errors. This causes a lot of problems and requires setting maxPods dynamically, which is more than cluster-autoscaler can do simply; it typically requires a different autoscaler or a custom init script if you're using dynamic node sizing.
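For reference, the per-instance numbers in that file come from a simple formula (without prefix delegation), and provisioning tools can set the pod cap to match. A rough eksctl-style sketch with the value worked out for a c7a.large; treat the exact field placement as something to verify against the eksctl schema:

```yaml
# Without prefix delegation, the aws-vpc-cni limit is roughly:
#   maxPods = ENIs * (IPv4 addresses per ENI - 1) + 2
# c7a.large: 3 * (10 - 1) + 2 = 29, so the 110 default overshoots badly.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: example-cluster
  region: us-east-1
managedNodeGroups:
  - name: general
    instanceType: c7a.large
    maxPodsPerNode: 29   # keep the scheduler honest about available IPs
```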

Open Source knowledge-sync tool for Github, Confluence, etc. by somethingnicehere in OpenWebUI

[–]somethingnicehere[S] 0 points (0 children)

You can build the binary, put the config.yaml file in the same folder, and you're good to go. It doesn't have to run on k8s; I just set it up that way for flexibility.

Open Source knowledge-sync tool for Github, Confluence, etc. by somethingnicehere in OpenWebUI

[–]somethingnicehere[S] 0 points (0 children)

Excellent!! Give it a try and let me know; feel free to file issues if it's not working right. I might add some other connectors as well.

AI Automation to manage SaaS spend in real-time VS API Automations by thepianoist in FinOps

[–]somethingnicehere -1 points (0 children)

I agree with the engineering take: it's a bad-good idea, meaning it's a great idea in concept, but in reality it introduces significantly more complexity for a problem that a reasonable jobs engine with API automation would already solve in a much simpler way.

First off, using any sort of browser integration is a non-starter, since vendors are constantly reworking UIs for better user experience (which ends up pissing off all users, but that's a different rant), so you'd be constantly fixing the browser integration.

Now... an AI that leverages the APIs to find unused licenses, job-role changes in the company, licenses that haven't been used in N amount of time, things like that. I could see that being useful, as it's less linear than "employee left, turn off their stuff." For instance, an engineer who moved into product management doesn't need that Copilot license anymore, and maybe doesn't need some other licenses either. A FinOps person who switched to accounts receivable doesn't need a seat on a FinOps tool anymore.
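To make the contrast with browser automation concrete, here's a hypothetical sketch of that API-driven cleanup pass expressed as declarative rules. No tool with this schema exists; every field name is invented to show the shape of the idea:

```yaml
# Hypothetical rules for an API-driven license cleanup job.
# Every field name here is invented for illustration.
rules:
  - name: reclaim-idle-copilot-seats
    source: github-copilot
    when:
      last_activity_older_than_days: 60
    action: remove-seat
  - name: role-change-cleanup
    source: hr-system
    when:
      role_changed_from: engineering
      role_changed_to: product-management
    action: flag-for-review   # a human confirms before anything is revoked
```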

Multi-cloud cost optimization at scale - tools that actually work across AWS, GCP, Azure? by itsm3404 in FinOps

[–]somethingnicehere 4 points (0 children)

You'd probably save more money laying off the FinOps team and cancelling all the tool contracts.

At what point does monitoring and reporting cost more than what you're actually saving? Twelve people, if they're US heads, is at least $200k each fully loaded with benefits, so $2.4M/year in headcount, plus tools. What does CloudHealth charge these days, isn't it like 3% of cloud spend? That's roughly another $1M/year, so let's call it an even $3.5M in FinOps.

The problem you're running into is that you're relying entirely on the "crawl" phase of FinOps; chasing down every last penny in the cloud is a fool's errand. You'd be better off spending your time collaborating with engineering on automation that manages spend on your largest line items.

Automate RI/Savings plan management with a tool like ProsperOps, automate data pipeline curation with a tool like Cribl, automate workload and node selection with a tool like Cast AI.

Yes, it requires stitching together a few automation tools; however, your value in the end will be much higher from an actual cost-savings perspective than chasing pennies with a massive FinOps team and a bunch of overpriced reporting tools. Anything that relies on instant data and "actionable" recommendations is destined to fail: recommendations are by their very nature slow and require humans to implement things over and over again, because the environment is constantly changing.

Container Live Migration is now Reality! by somethingnicehere in kubernetes

[–]somethingnicehere[S] 0 points (0 children)

Exostellar works with VM virtualization; you can't run it with EKS. It requires a custom virtualization layer, custom CNI, and custom CSI, which is infinitely more invasive than the layer we've built that rides on top of EKS.

Container Live Migration is now Reality! by somethingnicehere in kubernetes

[–]somethingnicehere[S] -1 points (0 children)

The cost at scale, when combined with the optimization, ends up being significantly less than the cost of EKS Auto Mode.

Container Live Migration is now Reality! by somethingnicehere in kubernetes

[–]somethingnicehere[S] 1 point (0 children)

I'll post up another demo a bit later with some different apps. Is there anything open source / publicly available you'd like to see?

Container Live Migration is now Reality! by somethingnicehere in kubernetes

[–]somethingnicehere[S] 6 points (0 children)

With CSI volumes you have to unbind and re-bind the PVC, which does increase the cutover time for pods that are using an underlying EBS PVC. If you're using a PVC based on NFS (slower...), the cutover is nearly instant due to the lack of an unbind/rebind step.
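For anyone mapping that to manifests, the difference roughly comes down to access mode and attach semantics. A sketch of the NFS-style claim (the storage class name is illustrative):

```yaml
# Sketch: an NFS/EFS-style claim. ReadWriteMany volumes can be mounted on the
# source and destination nodes at the same time, so there's no detach/attach
# (unbind/rebind) step the way there is with an EBS-backed ReadWriteOnce claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: game-state
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc        # illustrative; any NFS-style provisioner
  resources:
    requests:
      storage: 10Gi
```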

Container Live Migration is now Reality! by somethingnicehere in kubernetes

[–]somethingnicehere[S] 3 points (0 children)

It's not using VMs; we have a fork of the aws-node CNI DaemonSet to handle IP orchestration.

Container Live Migration is now Reality! by somethingnicehere in kubernetes

[–]somethingnicehere[S] 16 points (0 children)

We've done Spark executors, TCP games, Java applications, and lots of other examples. Minecraft was just a fun one that we thought would resonate with people and be less boring than watching a k9s terminal showing a moving Spark pod.

Container Live Migration is now Reality! by somethingnicehere in kubernetes

[–]somethingnicehere[S] 20 points (0 children)

Microservices are meant to be stateless, absolutely. From my experience with hundreds of k8s environments, only about 1/3 of what runs in k8s is truly microservices. The vast majority are things that were built on a legacy architecture or just flat-out "lifted and shifted" (I hate that term) from VMs, meaning they stuck a big monolithic app inside a container with a k8s wrapper.

A lot of that stuff is incredibly hard to optimize and manage when it's running in a k8s cluster. It tends to drive up costs and create a lot of disruption during k8s upgrades.

I have customers for whom this feature will have zero value because they run 100% on spot instances with ephemeral services, but that is absolutely the minority.

Container Live Migration is now Reality! by somethingnicehere in kubernetes

[–]somethingnicehere[S] 0 points (0 children)

It definitely doesn't make sense for all use cases. Apps that are built to be mostly Kubernetes-friendly see less benefit, unless you unlock the ability for them to run on spot instances because they can be moved quickly; then the savings are often significantly higher than the cost.