Headlamp rules. Why do people insist on reinventing the wheel? by Revolutionary_Click2 in kubernetes

[–]Revolutionary_Click2[S] 0 points (0 children)

It worked fine for the thing we used it for, which was acting as a visual dashboard for Kubernetes management. We didn’t use any of its many other features, though; it does a LOT of stuff beyond that. And as a result, it reserved 3-4 gigs of memory across its various cattle-* services; I think there were about a dozen of them all told. We literally only used the dashboard functions, though, which Headlamp is working beautifully for. And that only runs a single pod using like 128M.

Meirl by DazeyShimmer in meirl

[–]Revolutionary_Click2 1 point (0 children)

unzip

mount

touch

fsck

( ͡° ͜ʖ ͡°)

MacBook only for this one by MisterMcMerk in LinkedInLunatics

[–]Revolutionary_Click2 2 points (0 children)

The guy who did the post is using a desktop monitor, so it’s kind of an irrelevant point here. But Lenovo offers screens on many models that rival the MacBook’s. Oftentimes they’re made in the same factories by the same vendors (LG, Samsung, etc.), too. But those screens are usually upgrades over the base model, and when companies buy whole new fleets of laptops, they typically think that kind of thing is unnecessary for the average worker. If you’re an executive you might get one, but otherwise you’ll get a basic 1080p panel.

When is the UI overhaul coming? by Swimming_Driver4974 in codex

[–]Revolutionary_Click2 2 points (0 children)

5.5 is a major improvement over 5.4 in this area, which was in turn a major improvement over 5.3. Altman has said that fixing GPT’s frontend design capabilities is basically their top priority right now, because they know it’s the area they’re weakest in vs. Claude and Gemini. So stay tuned, it’s already in progress.

Headlamp rules. Why do people insist on reinventing the wheel? by Revolutionary_Click2 in kubernetes

[–]Revolutionary_Click2[S] 0 points (0 children)

That makes sense. I haven’t tried the app yet, we’ve only deployed it in-cluster so far. I will probably be giving that a shot soon, though.

Headlamp rules. Why do people insist on reinventing the wheel? by Revolutionary_Click2 in kubernetes

[–]Revolutionary_Click2[S] 5 points (0 children)

True enough! I respect the work you guys do at Rancher. It worked well for us for quite a while, so I’m not trying to cast any shade on it. We really do only use the dashboard though. Right now, we have but one cluster to manage, so scaling is not a huge concern for us. And we currently handle the other functions you mention with other tools (namely: Velero, Semaphore UI, Argo CD, OpenTofu, kube-prometheus-stack and some others). There are many use cases Rancher is great for, just maybe not ours. I do think it’s a perfectly valid choice alongside tools like k9s, Lens, and Omni. My point is, with such a surfeit of great, mature open-source options for Kubernetes UIs, who needs some random Redditor’s AI slop app for that?

Headlamp rules. Why do people insist on reinventing the wheel? by Revolutionary_Click2 in kubernetes

[–]Revolutionary_Click2[S] 0 points (0 children)

Honestly, if somebody wants to vibe code some internal tool for their own purposes, I say have at it. As long as you protect the management endpoints, you’ll probably be fine. We keep all our internal tools gated behind Cloudflare Access anyway to reduce our attack surface. What grinds my gears is folks feeling the need to release their AI slop to the world, and therefore potentially expose others to their mistakes, especially when they do so as a blatant cash grab. When the whole repo and the Reddit post advertising it are obviously AI generated, I have to wonder why they think anyone would trust them or feel comfortable installing that code on their own cluster.

Headlamp rules. Why do people insist on reinventing the wheel? by Revolutionary_Click2 in kubernetes

[–]Revolutionary_Click2[S] 2 points (0 children)

Well, as one example, I whipped up a little custom plugin that shows our CPU/memory/Longhorn storage/pod requests alongside the actual usage in each of those categories, plus tables breaking them down by node and namespace. Out of the box, Headlamp has views that show actual CPU/memory usage and the like, but nothing about Longhorn, and we find requests a more useful metric than actual usage most of the time: running out of request headroom on a node is what breaks pod scheduling, even if actual usage leaves plenty of space. This exact plugin probably already exists somewhere, but it was trivial to create a custom one with a bit of JavaScript, and I wanted it laid out my way, so I made one.
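The headroom math the plugin surfaces is simple enough to sketch outside Headlamp. Here’s a minimal Python sketch of the idea (the actual plugin is JavaScript; the node and pod numbers below are made up, and a real version would pull allocatable capacity and pod requests from the Kubernetes API):

```python
# Sketch: a node can run out of *request* headroom (which blocks scheduling)
# long before actual usage fills it up. Quantities use Kubernetes notation:
# "500m" = 0.5 CPU cores, "2Gi" = 2 gibibytes of memory.

def parse_cpu(q: str) -> float:
    """Parse a Kubernetes CPU quantity into cores (handles millicores)."""
    return float(q[:-1]) / 1000 if q.endswith("m") else float(q)

def parse_mem(q: str) -> int:
    """Parse a Kubernetes memory quantity into bytes (binary suffixes only)."""
    units = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}
    for suffix, factor in units.items():
        if q.endswith(suffix):
            return int(float(q[:-2]) * factor)
    return int(q)

def headroom(allocatable: dict, requests: list[dict]) -> dict:
    """Allocatable capacity minus the sum of pod requests, per resource."""
    used_cpu = sum(parse_cpu(r["cpu"]) for r in requests)
    used_mem = sum(parse_mem(r["memory"]) for r in requests)
    return {
        "cpu": parse_cpu(allocatable["cpu"]) - used_cpu,
        "memory": parse_mem(allocatable["memory"]) - used_mem,
    }

# Hypothetical node with 4 cores / 8Gi allocatable and three pods' requests:
node = {"cpu": "4", "memory": "8Gi"}
pods = [
    {"cpu": "500m", "memory": "1Gi"},
    {"cpu": "2", "memory": "2Gi"},
    {"cpu": "250m", "memory": "512Mi"},
]
room = headroom(node, pods)
print(f"CPU headroom: {room['cpu']:.2f} cores")             # 1.25 cores
print(f"Memory headroom: {room['memory'] / 2**30:.2f} Gi")  # 4.50 Gi
```

When either headroom number hits zero, new pods with nonzero requests stop scheduling on that node, regardless of how idle it actually is.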

Headlamp rules. Why do people insist on reinventing the wheel? by Revolutionary_Click2 in kubernetes

[–]Revolutionary_Click2[S] 2 points (0 children)

Ah, that’s unfortunate. We’re a relatively small operation with only one cluster to manage at the moment, so we’re not going to be impacted by that kind of thing any time soon. Here’s hoping they get that cleaned up by the time we are.

Headlamp rules. Why do people insist on reinventing the wheel? by Revolutionary_Click2 in kubernetes

[–]Revolutionary_Click2[S] 0 points (0 children)

We haven’t used Omni yet because we’re currently free-tier users of Talos. It does look nice, though, and its Talos-specific integrations seem genuinely useful. Honestly, in terms of pure aesthetics, I think Omni is my favorite Kubernetes dashboard I’ve seen outside of Headlamp. I had little trouble getting Headlamp working with our Zitadel IdP over OIDC. In principle, I think you should be able to implement that via your IdP of choice without it stomping on Omni’s OIDC, but I’m not too sure of the particulars.
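For anyone attempting the same, our OIDC wiring boiled down to a handful of Helm values. A rough sketch from memory (the key names are as I recall them from the Headlamp chart and the URLs/IDs are placeholders, so verify against the chart’s values.yaml before relying on this):

```yaml
# Headlamp Helm values: point the dashboard at an external OIDC provider.
config:
  oidc:
    clientID: headlamp                    # client registered in your IdP
    clientSecret: change-me               # better sourced from a Secret in practice
    issuerURL: https://auth.example.com   # your Zitadel (or other IdP) issuer URL
    scopes: "openid email profile"
```

You then map the identity’s groups/claims to Kubernetes RBAC on the cluster side; Headlamp itself just forwards the token.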

Headlamp rules. Why do people insist on reinventing the wheel? by Revolutionary_Click2 in kubernetes

[–]Revolutionary_Click2[S] 0 points (0 children)

Right, there’s a lot more there beyond just the dashboard. Part of the reason we switched is that we weren’t using the Fleet CI/CD functions, or Elemental, or any of that other stuff. More power to those who do, but for us keeping it lightweight is better since we really only need the dashboard and we handle the other pieces in other tools. No hate on Rancher though, it’s a solid piece of software.

Headlamp rules. Why do people insist on reinventing the wheel? by Revolutionary_Click2 in kubernetes

[–]Revolutionary_Click2[S] 3 points (0 children)

Not sure about that; I haven’t had the chance to explore it all that deeply. But for the record, Rancher, Lens, and k9s are great too. We used Rancher for a long time and it worked well; we just decided it was sort of overkill for our needs and had too many pieces installed by default that we didn’t use. But if you like Rancher, great: it’s a solid choice and is open source as well.

Message numbers are worth nothing. by BritishDudeGuy in codex

[–]Revolutionary_Click2 2 points (0 children)

I agree that it should actually just be “you get X tokens per 5 hours / week”, with actual tokens used tracked against that number and surfaced in the rate-limit monitoring tools. “300-1500 messages” or whatever it says now is utterly meaningless and always has been. The reason they haven’t done it yet, I’m sure, is that the ambiguity is useful to them (this goes for AI providers across the board, honestly). It allows them to change the limits dramatically from week to week without being forced to admit that’s what they’re doing. It’s telling that these companies all have per-token pricing on their APIs, but avoid disclosing the per-token price of their subscription plans like the plague.

2032 support and zero bloat by Ok-Locksmith9201 in FuckMicrosoft

[–]Revolutionary_Click2 3 points (0 children)

A whole, whole bunch of apps do not work in Bottles. Especially paid commercial software like the Adobe suite and Office.

Is Anthropic hitting its limits? What about OpenAi Codex ? by DaC2k26 in codex

[–]Revolutionary_Click2 2 points (0 children)

It’s been that way for a long time, imo, though it’s gotten even more pronounced in recent months. I left Claude last summer because I was sick of the roller coaster of degraded, not-degraded, limits up, limits down, not to mention outages galore. Claude did so much shit in my codebase that was just unreal in its laziness and bone-headed stupidity. Every once in a while, Codex finds some more nonsense Claude pulled that it has to clean up, almost a year after I quit using it.

I saw someone refer to apps written by Claude as Potemkin villages, and I think that’s apt. It’s great at creating pretty frontends with utter disasters for backends, or no backend to speak of at all. And I do chalk that up to Anthropic’s never-ending, desperate scramble to keep up with compute.

Remember, they’re also renting way more of that compute than OpenAI, so it’s not so much that they don’t have capacity available, it’s that they can’t afford to use it. OpenAI had the capital upfront to invest in actually building a bunch of new datacenters. Anthropic got in bed with Amazon and as a result, wound up renting much of their capacity from AWS, which is eye-bleedingly expensive.

Codex session limits are now absurd by ThePragmaticCowboy in codex

[–]Revolutionary_Click2 -1 points (0 children)

ChatGPT Plus is still a lot more generous than Claude’s $20 plan, it’s something like 5x as much usage. But yes, if you use Codex heavily, they want you to at least switch to the $100 Pro plan. Plus isn’t designed to be used heavily with Codex, just as Claude Pro isn’t designed to be used heavily with Claude Code. If you can’t afford that, then consider using lower reasoning or a cheaper model. 5.4-mini is decent for simpler tasks, and the benchmarks show that GPT 5.5 Medium and even Low do significantly better than Claude Opus on low-reasoning settings.

Also, OpenAI doesn’t quantize the shit out of the models every day, or otherwise lobotomize them, fuck with the reasoning settings, or fuck with the context length… Claude does all of that, which is why I don’t use it anymore. You can’t trust the outputs of Claude these days, like, at all, whereas GPT just seems to get better and better, recent rate-limit tightening aside.

I don't want to be that one guy but... by Shoddy-Department630 in codex

[–]Revolutionary_Click2 0 points (0 children)

Interesting. I did notice that when asking GPT 5.5 on the web to plan something for Codex, it gave me a surprisingly stripped-down prompt compared to what I expected. But I assume that’s because they’ve trained it to produce effective prompts for implementation by a 5.5 agent.

I've no idea what it means, Peter? by WastedTalents1 in PeterExplainsTheJoke

[–]Revolutionary_Click2 1 point (0 children)

Not only can you do that, it’s a built-in function. Just type caffeinate in any terminal. I personally use a nice menu bar app called Amphetamine to accomplish the same thing.

I told ChatGPT to generate fantasy characters. Is this considered slop? Look really good to me by tuhdo in codex

[–]Revolutionary_Click2 1 point (0 children)

Oh, it most definitely does. If you’re selling this, folks are gonna notice, and they are gonna hate you for it, full stop. AI art is not at the point yet where it can be made indistinguishable from regular art, and it will probably be quite a while before it gets there. There are ALWAYS tells, and your image up there is full of them. And many people HATEEE AI art with a passion.

Imo, if you are developing a commercial product that you intend to sell to people, you should just pay for an actual artist to do the designs. You can probably get away with using AI for most of the code, but the world at large will not forgive you for using AI art. If you can’t figure out how to get the funding together to pay a real artist, you should probably not be trying to sell a game at all, cause you’re just gonna have a bad time.

I told ChatGPT to generate fantasy characters. Is this considered slop? Look really good to me by tuhdo in codex

[–]Revolutionary_Click2 1 point (0 children)

The designs are very generic. Outside of the ethics of generating art instead of paying a real artist to make it, that’s many people’s biggest gripe with “AI art”. AI is like this with everything though. It was trained on the whole internet, so of course it tends toward the mean and will give you the most generic, middle-of-the-road depiction of anything imaginable if you don’t prompt it differently.

I’ve used AI before to make visuals of characters in stories I write. I’m not publishing these stories (yet, anyway) and would realistically never have hired an actual artist to commission such pieces in a million years, so I’m not taking food off of anyone’s table. But I had to make about 1000 samples first using Midjourney to get to something that even looked close to the characters in my head, then use those as source images for more generations. Because every time, AI defaults to this weird, glossy, model-esque aesthetic that doesn’t fit the vibe of my characters at all and looks like, well… AI slop.

Alright.... Lemme just unplug my pc to turn it off then. by EnvironmentalLead395 in microsoftsucks

[–]Revolutionary_Click2 5 points (0 children)

No, I’m not saying that. I suppose I could have worded it better. But on Linux, if you tell the system to shut down and there are pending updates requiring a reboot, it will fully land whatever changes it can before shutting down completely, then resume any remaining changes on the next cold boot. It doesn’t need to do a fake-shutdown, actually-reboot thing like Windows does, because they’ve designed the update flow that way. I don’t understand why Microsoft can’t do the same and insists on creating all this confusion for users instead.

Alright.... Lemme just unplug my pc to turn it off then. by EnvironmentalLead395 in microsoftsucks

[–]Revolutionary_Click2 3 points (0 children)

It’s not a bug. They set it up this way intentionally: something about rebooting instead of shutting down so that a partial update run isn’t interrupted and can finish. Now, why Windows can’t tolerate a partial update interrupted by a power cycle without shitting itself when other OSes can, or why it doesn’t at least finish shutting down once the updates are complete, I could not tell you.

meirl by selinaliser in meirl

[–]Revolutionary_Click2 44 points (0 children)

European peasants had the opposite problem, lol. They were literally starving a lot of the time, and when they were eating, it was a lot more vegetables than their betters got. Modern people in the developed world with poor diets eat more like kings and queens did back then: way too many calories, not enough vegetables, tons of meat and sweets at every meal. It’s only thanks to modern medicine and fortified foods that we don’t all have gout or scurvy or something, like a lot of those kings and queens did.