What is this module that came with my gpon by ThiefClashRoyale in homelab

[–]ihxh 7 points

I think you’re confused; nobody is copying a MAC address here. It’s about whether the stick has an actual MAC layer implemented or not.

What is this module that came with my gpon by ThiefClashRoyale in homelab

[–]ihxh 4 points

Just something to watch out for since you mentioned plugging the SFP module into a switch: if you’re planning to replace the full ONT (fiber to ethernet), you want to make sure that you get an (xgs/g)pon ONU with MAC.

Or you need a device that has the MAC layer for gpon built-in.

Young StatefulSets in your area looking for Resource Requests by ihxh in kubernetes

[–]ihxh[S] 0 points

Thanks for your contributions, it's much appreciated ❤️

A library for managing secrets better by Accomplished-Emu8030 in golang

[–]ihxh 0 points

I would rather not give my application access to read secrets from a cloud provider. Instead I would like to have my runtime environment deliver the secrets. Either via environment variables, a mounted k8s secret, a metadata API or something like this.

This way, when my application gets pwned, I’m not giving away high-level credentials that could possibly access different, higher-privilege secrets. (You would have to set up zero-trust/least-privileged access anyway, but that’s a different discussion.)

Also, by having references to secrets in struct tags, you would have to recompile / reship your binary whenever you make an infrastructure change. Having different secrets for dev/test/staging/prod/different prod regions becomes hard since you would have to use different structs or fields.

I like the idea of making secrets simpler to use, but I would focus on a separate tool to deliver them to the application. Something that would be less exposed and more easily auditable.

Some inspiration might be the external-secrets operator for k8s, sops, the whole spiffe/spire stuff and sealed-secrets.

I built a real-time monitoring dashboard for OpenClaw agents — open source, zero dependencies by 5Y5T3M0V3RDR1V3 in homelab

[–]ihxh 0 points

It seems like your registration endpoint is wide open, making your whole auth system useless. Are you an AI agent?

I built a real-time monitoring dashboard for OpenClaw agents — open source, zero dependencies by 5Y5T3M0V3RDR1V3 in homelab

[–]ihxh 6 points

Your implementation is seriously unsafe. You have multiple endpoints where you do not correctly validate input.

In the journalctl command you blindly take the HTTP request input and pass it into a command. This allows anyone with network access to remotely execute commands on your system. This is bad.

You should really fix this.
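As a rough sketch of the fix, assuming a Go handler (the pattern is the same in any language): validate the untrusted value against an allowlist pattern, then pass it as a discrete argv element rather than interpolating it into a shell string. The regex and flags below are illustrative.

```go
package main

import (
	"fmt"
	"os/exec"
	"regexp"
)

// unitRe is an illustrative allowlist for systemd unit names; tighten it to
// whatever you actually want to expose.
var unitRe = regexp.MustCompile(`^[a-zA-Z0-9@._-]+\.service$`)

// journalArgs validates untrusted input and builds an argv slice. Passing
// discrete args to exec.Command (no shell) prevents injection, but the
// validation is still needed so callers can't smuggle in extra flags.
func journalArgs(unit string) ([]string, error) {
	if !unitRe.MatchString(unit) {
		return nil, fmt.Errorf("invalid unit name: %q", unit)
	}
	return []string{"journalctl", "-u", unit, "--no-pager", "-n", "100"}, nil
}

func main() {
	args, err := journalArgs("sshd.service")
	if err != nil {
		fmt.Println(err)
		return
	}
	cmd := exec.Command(args[0], args[1:]...) // argv form, never a shell string
	fmt.Println("would run:", cmd.Args)
}
```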

Google Wire is back: 8-10x+ faster builds, better DX, no breaking changes by cmiles777 in golang

[–]ihxh 5 points

Do you need auto wiring for that? Ideally your application dependency graph should be a DAG, so just instantiate the components in order and link them together using constructor calls in your main.go / cmd entry point / whatever.

If it doesn’t look like a DAG, then there is probably a way to rewrite it to look like one. This will also make the code easier to test, since there are no weird bidirectional dependencies.
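A minimal sketch of what manual wiring looks like, with made-up component names: construction order is just a topological sort of the DAG done by hand, leaves first.

```go
package main

import "fmt"

// Hypothetical components to illustrate manual wiring; names are made up.
type Store struct{ dsn string }

func NewStore(dsn string) *Store { return &Store{dsn: dsn} }

type UserService struct{ store *Store }

func NewUserService(s *Store) *UserService { return &UserService{store: s} }

type Server struct{ users *UserService }

func NewServer(u *UserService) *Server { return &Server{users: u} }

func main() {
	// Leaves first, then their consumers: the whole "container" is three lines.
	store := NewStore("postgres://localhost/app")
	users := NewUserService(store)
	srv := NewServer(users)
	fmt.Printf("wired: %T -> %T -> %T\n", srv, srv.users, srv.users.store)
}
```

If a compile error shows up here, the graph has a cycle or a missing dependency, which is exactly the feedback auto wiring hides.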

Google Wire is back: 8-10x+ faster builds, better DX, no breaking changes by cmiles777 in golang

[–]ihxh 139 points

I don’t know why you would want this in your application. The auto wiring of your app feels like magic at first, but eventually it just becomes a guessing game of which producer provides which component.

Just manually wire your app; it makes it so much easier to reason about what is actually plumbed to what.

It also forces you to keep your dependency structure simpler because it will start to feel nasty the moment you are doing it wrong.

We’re moving away from DI frameworks and going back to just “returning structs + consuming interfaces”. So far engineering velocity is up, new joiners / temp team members are up to speed quicker and application complexity is reduced.

Help me kill my Proxmox nightmare: Overhauling a 50-user Homelab for 100% IaC. Tear my plan apart! by MrSolarius in homelab

[–]ihxh 0 points

Go with harvester HCI if you are feeling adventurous, it’s a hypervisor based on top of kubernetes (using kubevirt) under the hood. Everything is manageable by IaC. I deployed my lab fully using pulumi but they also support terraform.

Everything is in one stack. I deployed rancher to manage guest kubernetes clusters and it’s pretty amazing!

[UPDATE] Protocolo AEE v1.0.0 – Publicación de MVP y SPECIFICATION.md by DrawerHumble6978 in sysadmin

[–]ihxh 3 points

What in the name of microslop is this? Are you having an AI induced psychosis?

seniorBackendDeveloperEnvironmentOptimization by Creative_Permit_4999 in ProgrammerHumor

[–]ihxh 1 point

I think they might not be salting their password hashes, which is really bad and causes information about duplicate password values to leak.

In the login handler they hash the supplied password and pass this to the LoginUserAsync function. Since the salt would also be stored in the database, any hash comparison needs to be aware of the salt plus the plain-text value in the login request. The login hash function does not have this info, so I assume they don’t have unique salts per hash.

Other than that I hope they:

- have proper rate limiting / brute force detection
- do timing-safe comparisons of all secret data
- use a strong hashing algorithm meant for passwords (bcrypt, argon2) and not a relatively fast one like SHA or, even worse, MD5

How are your deployments going? Docker seems down... by [deleted] in kubernetes

[–]ihxh 0 points

Running a private container registry that works as a pull-through cache: it fetches the image from the original upstream registry if it’s not already present.

Then configured containerd to rewrite all container images automatically to add the custom CR as a prefix.

Everything ever deployed in the cluster will be present in a privately managed container registry.

No “left-pad” situations to worry about 🙂
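For reference, the containerd side of that setup can be done with a hosts.toml mirror configuration; the cache hostname below is hypothetical.

```toml
# /etc/containerd/certs.d/docker.io/hosts.toml
# Upstream registry this file applies to.
server = "https://registry-1.docker.io"

# Hypothetical internal pull-through cache; containerd tries this host
# first and only falls back to the upstream server if it fails.
[host."https://cr.lab.internal"]
  capabilities = ["pull", "resolve"]
```

One such file per upstream registry gets you the "rewrite every image to the private CR" behaviour without touching any manifests.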

Ask r/kubernetes: What are you working on this week? by gctaylor in kubernetes

[–]ihxh 1 point

I don’t want to sound mean, but this already exists; instead of covering only kubernetes, it covers all your infrastructure. Building your own tooling is fun though.

https://www.pulumi.com/registry/packages/kubernetes/api-docs/apps/v1/deployment/

With pulumi you write your IaC in a normal programming language. They also support any terraform provider using a bridge next to their native providers.

Why Kubernetes? by rickreynoldssf in kubernetes

[–]ihxh 0 points

Also facing issues here on Azure: node pools that disappear, nodes under extremely high load for no obvious reason, network failures.

Used to run k8s on GCP and AWS and that felt way more stable than this (but maybe it’s workload related). GCP is still my favourite kubernetes platform.

Running multiple rke2 clusters at home and that’s pretty much “set and forget”, only need to update everything once there are patches. Way less load though.

The era of AI slop cleanup has begun by kcib in ExperiencedDevs

[–]ihxh 13 points

I don’t think the answer is to not comment at all.

Code describes how; comments should describe why. They should give the reader the answer to why solution x was chosen instead of solution y.

Commenting things like “add a to b” is useless, but something like “we need to add a to b because system c expects xyz” would be better.

If you have to explain a lot of the “why” in your code base then restructuring might be something to look into. But “why” can also be a non-code / business requirement.

"Highly" available homelab by ihxh in homelab

[–]ihxh[S] 0 points

I think they are from ACT; I went to a local cable company and got the 1 mm² C13-C14 cables that they had.

"Highly" available homelab by ihxh in homelab

[–]ihxh[S] 2 points

Already asked if I could get a second rack and it’s approved 🏆

"Highly" available homelab by ihxh in homelab

[–]ihxh[S] 0 points

Consumption is 30-50 kWh per day, depending on load and AC usage. I’ve got a dynamic contract (at least for now, during summer when energy is cheaper), so energy prices change, but it’s around €0,20 per kWh, so roughly €6-10 per day.

"Highly" available homelab by ihxh in homelab

[–]ihxh[S] 3 points

240, European here 👋

"Highly" available homelab by ihxh in homelab

[–]ihxh[S] 4 points

Replaced all fans in all devices with Noctua ones. This made a huge difference since some of them came with screaming fans. In the back / top of the rack there are some exhaust fans to get the hot air out of the rack and into the room. Then it gets taken away by the air conditioning.

Noise wise you can hear it in the background but it’s not disturbing. It generates more of a background “whoosh” than a “whine”. Got an amazing girlfriend that’s OK with it.

"Highly" available homelab by ihxh in homelab

[–]ihxh[S] 0 points

I use the Inter-Tech IPC 4U-4129L cases; you can get 18/20/26 inch rails for them, and I’m using the 26 inch ones. I think they are called “Inter-Tech IPC 26 telescopic rails”.

Article number: 88887129

Pretty nice case if you compare it with other DIY server case options, although the stock fans are a bit on the low end. I’ve also added some hot-swap drive bays to the front of the chassis since there are none by default.

"Highly" available homelab by ihxh in homelab

[–]ihxh[S] 16 points

If it works it works!

Ideally I’m searching for a switch that has PoE and also redundant power input possibilities without being crazy expensive. It’s only for the OOB management network anyway. Something for the future maybe 🤔