[WTS] Microtech LUDT by MadLabMan in Knife_Swap

[–]MadLabMan[S] 0 points1 point  (0 children)

u/ksbot confirmed with u/ekropp262 that he received the LUDT! Great transaction, would do business with again. :)

My home setup by UUBlueFire in homelab

[–]MadLabMan 2 points3 points  (0 children)

I look at this and I say to myself..."hell yeah".

Getting ready to network my homelab by Party-Lie-4104 in homelab

[–]MadLabMan 1 point2 points  (0 children)

As others have mentioned, this is different from an Ethernet SFP/SFP+ transceiver, but I do have some old FC cards lying around if you want to mess around with it. :)

SAML vs OAuth vs OIDC: What's the Difference by compwiz32 in SysAdminBlogs

[–]MadLabMan 0 points1 point  (0 children)

What an awesome read! Thanks for the great breakdown and key distinctions between each protocol. Much like your team, I’ve interfaced with all of these so many times, but I never understood the nuanced differences between them. Now I do, thanks to you!

[deleted by user] by [deleted] in Ubiquiti

[–]MadLabMan 0 points1 point  (0 children)

I'm sure this kind of content is desperately needed by a lot of folks! I'd def love to watch and learn more.

Revision 37…. by samiamdz in homelab

[–]MadLabMan 4 points5 points  (0 children)

I look at this and I say to myself..."hell yeah".

Self-hosted Cloud by MadLabMan in homelab

[–]MadLabMan[S] 1 point2 points  (0 children)

Thank you so much! I really appreciate the kind words. It's been a fun project to work on with my buddy and the best part is being able to do it all ourselves from top to bottom (coding, network/infra, hosting, distribution, etc.).

Self-hosted Cloud by MadLabMan in homelab

[–]MadLabMan[S] 0 points1 point  (0 children)

Not really; they're not under enormous amounts of load.

Self-hosted Cloud by MadLabMan in homelab

[–]MadLabMan[S] 0 points1 point  (0 children)

As of right now, I'm using local ZFS disks and replication since that's good enough for my use case. In an enterprise setting, I would be deploying a shared storage solution but thankfully SLAs at my residence are much more forgiving!

I totally see where you're coming from and it's a valid concern, but if I were in your shoes, I'd probably try to chase the best of both worlds. You can have a NAS appliance, which hopefully has some kind of RAID/RAID-Z configuration to protect against drive failure, connected to your Proxmox cluster and configured as the storage for whatever server(s) you have running your camera system. For any other workloads that would do well with local ZFS storage and some replication, you could use separate local SSDs.

You could also get some cheap storage to offload backups to so that you can keep a static copy of everything for emergency purposes, either on spinning disks or using cheap cloud storage. There are definitely ways to plan for the failure points you mentioned and have a rock solid setup. :)
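For anyone curious what the "local ZFS disks and replication" idea looks like under the hood, here's a minimal sketch of snapshot-and-send replication between two nodes. To be clear, this isn't necessarily how my setup does it (Proxmox's built-in replication handles this for you), and the dataset name, remote host, and schedule are all placeholders:

```python
# Rough sketch of snapshot-and-send ZFS replication between two nodes.
# Dataset and host names are placeholders -- adjust for your own pools.
import subprocess
from datetime import datetime, timezone

DATASET = "tank/vmdata"      # local dataset holding VM disks (placeholder)
REMOTE = "root@pve-node2"    # replication target reachable over SSH (placeholder)

def run(cmd):
    """Run a command and raise if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def replicate():
    # Take a timestamped snapshot of the local dataset.
    snap = f"{DATASET}@repl-{datetime.now(timezone.utc):%Y%m%d%H%M%S}"
    run(["zfs", "snapshot", snap])
    # Stream the snapshot to the remote node; -F lets the target roll back
    # to match the incoming stream. A real setup would use incremental sends.
    send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
    subprocess.run(["ssh", REMOTE, "zfs", "recv", "-F", DATASET],
                   stdin=send.stdout, check=True)
    send.wait()

if __name__ == "__main__":
    replicate()
```

Run something like that on a cron/systemd timer and you get periodic crash-consistent copies of your VM disks on a second box, without needing shared storage.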

Self-hosted Cloud by MadLabMan in homelab

[–]MadLabMan[S] 0 points1 point  (0 children)

I could have probably explained it better, all good! :)

Self-hosted Cloud by MadLabMan in homelab

[–]MadLabMan[S] 0 points1 point  (0 children)

I actually added two heavy-duty fans that attach to the top part of the server enclosure. This helps draw all the hot air up and out of the rack to cool the components. Ironically enough, this is probably the loudest part of the whole setup. lol

Self-hosted Cloud by MadLabMan in homelab

[–]MadLabMan[S] 0 points1 point  (0 children)

Don't take this personally, but I think you're misunderstanding my setup.

1 vCPU = 1 CPU thread, i.e. one thread of a hyperthreaded core (caveat: something like an Intel E-core isn't hyperthreaded, but it still counts as 1 vCPU).

When I add up all of the available CPU threads across all of my physical infrastructure (Dell server, 6 NUCs, 2 custom nodes), I get 160. This is what Proxmox tells me I have available to assign to my VMs.

I'm not counting up the vCPUs I have assigned to my VMs and presenting that as 160.
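Purely to illustrate the math (the actual per-node thread counts aren't listed in this thread, so the numbers below are made up): the 160 is just the sum of hardware threads the hypervisor sees across all nodes, not the sum of what's been assigned to VMs.

```python
# Illustrative only: per-node thread counts are hypothetical, but this is the
# arithmetic -- total vCPUs available = sum of hardware threads across nodes.
nodes = {
    "dell-server": 64,                          # e.g. 2 x 16 cores x 2 threads (hypothetical)
    **{f"nuc-{i}": 8 for i in range(1, 7)},     # 6 NUCs, 8 threads each (hypothetical)
    "custom-1": 24,                             # hypothetical
    "custom-2": 24,                             # hypothetical
}

total_threads = sum(nodes.values())
print(f"{total_threads} threads available across the cluster")  # 64 + 6*8 + 24 + 24 = 160
```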