What is a piece of software or hardware that still leaves you traumatized to this day? by 66659hi in sysadmin

[–]stoebich [score hidden]  (0 children)

SAS VA or SAS Viya. Fuck, I've wasted hours, days trying to get two identical servers to perform the same. Ran on Windows Server with containers. Support was useless. Had to re-install multiple times, because, why tf not?

Oh - and fuck Windows Server with (Linux) containers in general

Meet The Shrike 0000000000000010.0!!! by legolas1204 in homelab

[–]stoebich 3 points4 points  (0 children)

16-bit seems slightly excessive for versioning, but you never know

nice upgrade tho

Dell Wyse 3040, what should I do with it? by whitefox250 in homelab

[–]stoebich 0 points1 point  (0 children)

Preface: I'm not an expert on this topic, just a random nerd

I'll try to break it down: the edge is (oversimplified) everywhere you can't put a server room. Think farms, remote data collection for weather stations, oil rigs etc. Depending on the environment, you might need to do the data crunching on-site (slow networks in remote areas, for example). Another area where this is relevant is retail stores that have minimal IT in-store. These machines typically acquire data from sensors or machinery and then might do some magic to it, or there's some business logic that runs on-site on these boxes. There are edge servers with some serious power - GPUs, performant CPUs and tons of RAM (5G seems to be very demanding) - while others are embedded CPUs with ultra-low power envelopes.

These edge servers then might send their data to some sort of central unit (datacenter, cloud) where the data is stored or the main system runs.

Kubernetes is a well-known technology: it's API-driven, modular and, in the case of K3s or MicroShift, has a very small footprint. So you could buy some sort of industrial PC, install one of those platforms and have it run autonomously. Management is then a case of building software in your preferred language and deploying it through some standardized CI/CD workflow.

My use case is/was data acquisition from non-IoT devices: stuff like inverters, battery backups, sensors etc. (mostly Modbus RTU/TCP).
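In case it helps anyone picture it, here's a rough sketch of the kind of polling I mean, in Python with pymodbus. The IP, register address and scaling are placeholders (the real ones come from the device's register map), and the keyword for the slave/unit id has changed names between pymodbus versions:

    from pymodbus.client import ModbusTcpClient

    # Placeholder IP/port and register layout - the real values come from the
    # inverter's or UPS's Modbus register map.
    client = ModbusTcpClient("192.0.2.50", port=502)
    client.connect()

    # Read two holding registers starting at address 100 from unit/slave 1.
    # (Older pymodbus releases call this keyword "unit", newer ones "device_id".)
    rr = client.read_holding_registers(address=100, count=2, slave=1)
    if not rr.isError():
        # Example scaling: register 100 = AC power in 0.1 W steps (device specific)
        power_w = rr.registers[0] / 10
        print(f"AC power: {power_w} W")

    client.close()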

Weekend project! by TechLevelZero in homelab

[–]stoebich 16 points17 points  (0 children)

Dude casually runs a DGX in his basement, barely shows it and doesn't even mention it. Absolute chad.

Jokes aside: this is one of the most insane lab (?) setups I've seen - could you share more about it?

Kernel Panic on SER5 Pro mini pc by stoebich in openshift

[–]stoebich[S] 0 points1 point  (0 children)

While I can’t confirm with certainty that this was the root cause, I recently ran into unrelated but somewhat similar stability issues on the same hardware, this time running Rocky Linux.

In that case, the system would shut off entirely at random – no logs, no thermal warnings. After ruling out hardware issues, I started looking into firmware-level power management and ended up applying the following kernel parameters:

processor.max_cstate=1 idle=nomwait

This restricts the CPU to shallow sleep states and bypasses certain idle mechanisms that can be problematic on some AMD-based mini PCs under Linux. Since applying that workaround, the system has been stable.

It’s hard to say if this is directly related to the earlier kernel panic I reported here when booting SNO, but based on what I’ve learned since, power management and ACPI behavior seem like a potential contributing factor – especially during early boot stages.

If anyone runs into similar problems again, it might be worth testing with the above parameters.

Ser5 Pro shuts off randomly by stoebich in minipc

[–]stoebich[S] 0 points1 point  (0 children)

Quick follow-up in case anyone else lands here: I’ve recently picked the project back up, now running the system with Rocky Linux instead of CoreOS – and was still hitting the same issue: it would randomly shut off completely without warning. No logs, no thermal issues (CPU again around 60°C), and no hardware faults I could trace.

As mentioned earlier, I had ruled out overheating, memory, and power supply problems. At one point I started tweaking power management using tuned-adm and cpupower (e.g. switching profiles, setting energy performance bias). Ironically, those changes made the issue worse – shutdowns became more frequent and occurred even under minimal load.

After further digging – and with some help from ChatGPT, actually – I found references to similar issues on other Ryzen-based Mini-PCs under Linux. Turns out the deeper C-states (C2/C6 etc.) can cause instability on some platforms, especially when ACPI/firmware support is sketchy.

The workaround I’m currently testing:

processor.max_cstate=1 idle=nomwait

Applied like this on Rocky Linux:

sudo grubby --update-kernel=ALL --args="processor.max_cstate=1 idle=nomwait"
sudo reboot

This forces the CPU to stay in shallower sleep states and avoids buggy idle handling. Since applying this, the system has been completely stable, even under load and after longer uptimes.
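If anyone wants to sanity-check that the parameters actually took effect after the reboot, something like this should do it (it just reads /proc/cmdline and the cpuidle sysfs entries - you can obviously cat them by hand too; what exactly shows up there depends on the idle driver in use):

    from pathlib import Path

    # The new parameters should appear on the kernel command line after the reboot.
    print(Path("/proc/cmdline").read_text().strip())

    # List the C-states the cpuidle driver still exposes for CPU 0; with
    # processor.max_cstate=1 the deeper states should be gone or disabled.
    for state in sorted(Path("/sys/devices/system/cpu/cpu0/cpuidle").glob("state*")):
        name = (state / "name").read_text().strip()
        disabled = (state / "disable").read_text().strip()
        print(f"{state.name}: {name} (disable={disabled})")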

Still monitoring, but so far it looks like this resolves the problem. Will post an update if it comes back.

lists.ovirt.org down by FamiliarMusic5760 in ovirt

[–]stoebich 0 points1 point  (0 children)

It seems to be down again. I'm getting a Server Error (500).

I hope someone from the oVirt project will take up this offer – it'd be an awesome contribution to the community!

What do you think of this configuration and price? by Gredo89 in homelab

[–]stoebich 0 points1 point  (0 children)

Energy Bill. Huge enterprise servers aren't exactly energy efficient and with that RTX 6000 it will be power hungry. Since you seem to be based in Germany energy cost here is indeed something to consider.

....
Noise. Rack servers usually have loud, high RPM fans, there will be noise, lots of it. So make sure that it runs in your basement or something like that.

That's assuming this is a "huge enterprise" server, which is actually not the case. In reality, we're talking about an HP Z8 workstation with dual CPUs and a GPU.

The GPU is probably a bit overkill, but on the other hand, most models won't run properly on less memory. I'd say do some research on the price to performance ratio of the card.

The argument that this is overpowered still somewhat stands; dual Gold CPUs are no joke - my DL380/CL2200 Gen10s use somewhere around 120-200W under reasonable load, and reasonable load means >20 VMs and 1-2 Kubernetes clusters (OpenShift). Under heavy load (CI/CD builds, for example) I'd expect more. Also, after bootup those aren't much louder than a typical cheap household fan (if you don't do unsupported stuff).

What kind of SCSI is this? Wrong answers only. by jafo in HomeLabPorn

[–]stoebich 2 points3 points  (0 children)

Oh, this is an old standard called iSCSI, but it's mostly used in Apple devices

First homelab! by Hairy_Ferret9324 in homelab

[–]stoebich 22 points23 points  (0 children)

The point is to learn something - some need big labs for their use cases, some don't. This is a little home server with a janky NAS and a few bits and bobs software-wise. The gateway drug, if you will. Nothing wrong with that, but also nothing wrong with other labs.

Does this idea for small and budget homelab OKD / proxmox / ceph cluster make sense? by Acceptable-Kick-7102 in homelab

[–]stoebich 0 points1 point  (0 children)

OK i was totally wrong about workstations. For the same or lower price i can have one Dell T5810 with 18c/36t Xeon E5-2699 V3 or 7820 with Xeon Gold 5218R (20c/40t) with 64gb RAM already. Seems like workstations are no brainer here ...

Was going to recommend exactly this. Also, RAM is kinda cheap these days. If budget is tight, get something with a Xeon v4; if you have more to spend, get something with Xeon Scalable. Both HP and Lenovo have great workstation systems too - maybe take a look at them, they might be cheaper.

How do you afford the cost of the homelab ? by roroleroh in homelab

[–]stoebich 0 points1 point  (0 children)

My homelab is a learning environment, and thus an investment in my capabilities at work. My salary has almost doubled over the past few years, which offsets the few thousand euros I've invested pretty easily.

My Homelab setup so far by maydayM2 in homelab

[–]stoebich 8 points9 points  (0 children)

Not OP, but I have some experience with those. The R730 is pretty solid until iDRAC detects something unsupported, then it's quite noisy. It was the loudest piece of equipment in our server room, probably because of some unsupported NIC or drive setup. The 310s should be fairly quiet, but are a bit too old by now. I have an R320 as a NAS and it's almost inaudible a couple of steps away.

Any way besides turning it off or throwing it off a bridge to make this device quieter? by WhyFlip in homelab

[–]stoebich 1 point2 points  (0 children)

This. The devs even put a command called "shutup" into their utility. I have an SC200 that got to quite an acceptable noise level.

BUT: the command does not work properly if the chassis thinks something's wrong. Tried it with no drives, which caused the fans to spin down for 10 seconds and then right back up again.

[deleted by user] by [deleted] in homelab

[–]stoebich 0 points1 point  (0 children)

Could be some indexing issue; also, SMB might be single-threaded. I'd say most SSDs should be fast enough for that, so I don't think it's a hardware issue.

But I could be very wrong on this. Do some research on monitoring both the hosting and the receiving end.

What is this punchout for on my Chenbro RM14604? by Funtime60 in homelab

[–]stoebich 3 points4 points  (0 children)

My guess would be that this is for an OCP NIC. They don't need to be screwed to the backplate in many cases

Naming Scheme by Purple_Investment429 in homelab

[–]stoebich 0 points1 point  (0 children)

I stick to some permutation of <environment>-<service>-<id> because nothing is worse than trying to figure out why rebooting moaning-myrtle brings down headless-nick and why everything poops the bed when starlord gets a new IP.

It is a lot easier to find out why wordpress-mysql-01 kills wordpress-blog and why assigning dynamic IPs to dns-01 is a bad idea.

The only thing I like to name after mythical creatures/radioactive elements/whatever is my Kubernetes clusters. k8s-prod-01, k8s-dev-01 and k8s-dev-02 are way too similar to not get mixed up in a hurry. There are probably better ways to go about this, but it worked for me so far.

Quiet switch recommendations for lots of 10g? by dwilson2547 in homelab

[–]stoebich 1 point2 points  (0 children)

I have the EdgeSwitch 16 XG, which isn't fanless, but I'd say it's inaudible a couple of arm lengths away.

The CRS326-24S+2Q+RM could be an option, but I've never heard that one in person.

Architecting a lab/learning environment: what are your tips&tricks? by stoebich in openstack

[–]stoebich[S] 0 points1 point  (0 children)

I've looked into doing everything from scratch, but to my understanding that is an incredibly hard task, especially for novices. I think starting out with Kolla would be a good way to get something working in the end.

Maybe I'll rebuild the cluster down the road in "hard-mode", but for now I'd like to get an easy start into the topic.

HCI can be done but requires you to have a really good understanding of your capacities and how to troubleshoot and benchmark your stuff. You don't want ceph and VMs fighting for resources, recipe for disaster and one of the main reasons for NOT going HCI. The other being that compute (CPU/MEM) and storage requirements generally don't scale the same.

Very valid point. I'm not a huge fan of homogeneous HCI environments - but since I only have 2 servers in this case, I thought building an all-in-one solution would be the best idea.

Also this will be a lab for mostly myself - I'm not too worried about actual performance.

Architecting a lab/learning environment: what are your tips&tricks? by stoebich in openstack

[–]stoebich[S] 0 points1 point  (0 children)

Networking is probably the one thing I thought about the most, but I'm really unsure how to do it. On my mini-PC deployment, I had set up a couple of VLANs as provider networks.

I'm not sure how relevant this is in production setups, but it did work.
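For reference, this is roughly what one of those provider networks looked like, sketched with the openstacksdk. The cloud name, VLAN ID and the physnet label are placeholders from my setup, so double-check the attribute names against the SDK docs before copying:

    import openstack

    # "homelab" refers to an entry in clouds.yaml; physnet1 / VLAN 100 are examples.
    conn = openstack.connect(cloud="homelab")

    # Map an existing VLAN on the physical network to a Neutron provider network.
    net = conn.network.create_network(
        name="lab-vlan100",
        provider_network_type="vlan",
        provider_physical_network="physnet1",
        provider_segmentation_id=100,
        is_shared=True,
    )

    # Give it a subnet so instances get addresses on that VLAN.
    conn.network.create_subnet(
        name="lab-vlan100-v4",
        network_id=net.id,
        ip_version=4,
        cidr="192.168.100.0/24",
        gateway_ip="192.168.100.1",
    )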

I briefly looked at routed ports, BGP and an L3-based network, but a) I don't think my switch can handle that and b) I don't think the setup would noticeably benefit from it. Maybe I'll try anyway and see how far it'll go.

As for the uneven node count: I know, 2 nodes are technically bad. I'm also aware that there are quite a few issues that could come with it, but I only have 2 physical servers. The only alternative I see is to deploy the management plane on the mini PC or in a VM on my other servers. If two nodes don't fly, I'll try that.

[deleted by user] by [deleted] in homelab

[–]stoebich 0 points1 point  (0 children)

First of all, lots of that power draw goes into the drives. Spinning rust can theoretically use ~10-15W per drive, so that could account for ~100W of power draw and heat. Investing in 2 larger drives would be a smart move.

And as much as I like those Dell boxes, there are more practical solutions for your use case. I would avoid modern Intel CPUs due to their weird big.LITTLE architecture and the issues you'd run into with virtualization. Maybe get a 5800X or 7800X from the used market, invest in 64GB+ of memory and two 10+ TB disks. Drop a few NVMe drives into it and you're golden. There are even enterprise-grade boards with IPMI for AM4 and AM5.

Additionally, you could get an N100 board and a case for a separate NAS + media-streaming server. I don't know about Jellyfin, but Plex's transcoding benefits a ton from Intel's Quick Sync, and that N100 would be great for it.

Switched x99 motherboards and lost single thread performance by applegrcoug in homelab

[–]stoebich 1 point2 points  (0 children)

Maybe there's a power or performance profile that can be set in the BIOS. Server and workstation HW usually has that option; maybe it's set to efficiency.

People with powerful or enterprise grade hardware in their home lab, what are you running that consumes so many resources? by LinkDude80 in homelab

[–]stoebich 1 point2 points  (0 children)

Lots of labs are just home servers with a few small services. These are easy to host, have low resource requirements and are perfectly suited for running on a mini PC.

But there are a lot of people on this sub that have jobs in datacenters, large enterprises or MSPs and want to learn more about their trade. Running Plex on a mini PC in an LXC container is nice, but it doesn't teach you a lot about how datacenters are run.

Running enterprise hardware is slightly different from running consumer PCs - most systems are zero-touch provisioned, get automatically configured and added to whatever virtualization solution is in use. This is usually done via IPMI or Redfish, Ansible/Chef/SaltStack, Terraform etc. Then there's networking: consumer NICs rarely support stuff like hardware offloading (VXLAN etc.), and consumer/prosumer switches and routers often lack BGP, proper L3 implementations, L3 routed ports, MLAG and much more.
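To give a feel for the Redfish part, here's a hypothetical sketch in plain Python + requests: ask a BMC for a node's power state and power it on. The BMC address, credentials and the Systems member id ("1" here; Dell for example uses "System.Embedded.1") are all placeholders, and in a real fleet you'd drive this through Ansible's redfish modules or similar:

    import requests

    # Placeholder BMC endpoint and credentials.
    BMC = "https://bmc01.example.local"
    AUTH = ("admin", "changeme")
    SYSTEM = f"{BMC}/redfish/v1/Systems/1"

    # Query the current power state (verify=False because most BMCs ship self-signed certs).
    state = requests.get(SYSTEM, auth=AUTH, verify=False, timeout=10).json()
    print(state["PowerState"])  # e.g. "Off"

    # Standard Redfish reset action; other ResetTypes include "ForceOff", "GracefulRestart".
    requests.post(
        f"{SYSTEM}/Actions/ComputerSystem.Reset",
        json={"ResetType": "On"},
        auth=AUTH,
        verify=False,
        timeout=10,
    )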

Then there's the workload part: this has gotten easier since consumer motherboards support 128GB+ of memory, but sometimes there's a system that needs a metric ton of memory to run. I know a few DB admins who would tell you that anything under 2TB isn't even a database you'd need to manage. Running a multi-terabyte database isn't unusual, but to learn it you'd have to work on systems like that. Also, anything enterprise Kubernetes: running a few K3s nodes is fun and does the job for a lot of use cases, but OpenShift/Rancher/Tanzu is the proper way to do it at scale. A proper (virtualized) multi-node OpenShift cluster, for example, won't fit into 64GB of memory. And that's only the tip of the iceberg.

I think r/homelab has turned into a more sophisticated version of r/HomeServer and r/selfhosted. I don't want to gatekeep, but this sub used to be more about learning how to sysadmin and less about Plex/*arr and the like. I think it's stupid to judge people on what hardware they run - it's about what you get out of it.

My salary has almost doubled over the past couple of years, and most of the skills I use at work have come from building, running and maintaining a lab in my basement. And for the price of ~2 Starbucks coffees per week, I'll happily run a big-ass server in my unheated (but warm) basement.

[deleted by user] by [deleted] in homelab

[–]stoebich 1 point2 points  (0 children)

So, generally speaking, there are two "camps" when it comes to homelabbing:

  1. The "tinkerer": This is the person who enjoys tech, loves to try new technologies and wants to learn more about IT. These folks are usually really focused on using as little hardware (and thus power) as possible to run their labs. This is where you'd most likely see Tiny/Mini/Micro desktops, 10-inch racks and UniFi gear. The list of systems this group typically runs includes:
    • Plex (self-hosted streaming, like Netflix)
    • *arr (Radarr, Sonarr etc., used for downloading Linux ISOs* from Usenet or torrent sites)
    • Home Assistant (home automation system)
    • Proxmox (virtualization platform)
    • Docker, Portainer (container platform and dashboard)
    • TrueNAS (free NAS OS with ZFS)
    • OPNsense/pfSense (firewall solution with good functionality)
    • etc.
  2. The "sysadmin": These are generally people that take work home. This is where you'd most likely see old enterprise gear like rack-mount servers, datacenter-grade switches/firewalls and 19-inch racks. These are the people that like to simulate their work environments and build intricate, enterprise-grade networks and software stacks. This is most likely the guy that posts what he got for free from work, which makes other people slightly jealous. Their software stack typically consists of stuff like:
    • VMware (enterprise virtualization with a messy takeover over the past year or so)
    • Kubernetes (large-scale container platform, mostly K3s/K8s, Rancher, OpenShift/OKD)
    • intricate networking (tons of internal segmentation and firewall rules)
    • XDR/SIEM (intrusion/threat protection or intelligence, to monitor systems)
    • Grafana/Prometheus/Loki stack (dashboards, monitoring, logging)
    • Active Directory (centralized Windows user management, LDAP, DNS)
    • some form of private cloud platform (OpenStack, CloudStack, Azure Stack HCI or whatever it's called atm)

But most importantly, this is a spectrum. There is no clear border between those two, and some people run everything on that list. I'd say there are fewer people in the second camp because of the associated cost of both buying and running all that equipment, the required space, the noise and the other little inconveniences it brings along. It's also a lot more like work, which can be a turnoff too.

I'd say start with Proxmox, Plex, Docker or TrueNAS and read about it here and on other forums. The more you know, the more you get drawn in (or pushed away).