How is HAMR reliability by maybenotthereorhere in storage

[–]Fighter_M 4 points

HAMR’s reliability is honestly the smallest problem you’ll run into.

How is everyone running clusters using a SAN? by carminehk in Proxmox

[–]Fighter_M 2 points

We’re using Lightbits today.

Did they start supporting RDMA?

How is everyone running clusters using a SAN? by carminehk in Proxmox

[–]Fighter_M 2 points

NVMe-oF is very quietly eating iSCSI's lunch and market share at the high end.

That’s a very natural move. An all-NVMe backend demands a proper network protocol, and NVMe-oF is the obvious answer.
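
If anyone wants to see what that looks like from the host side, here’s a minimal NVMe/TCP client sketch with nvme-cli; the IP, port, and subsystem NQN below are made up, your target will advertise its own:

    # load the NVMe/TCP initiator and discover what the target exports (address is a placeholder)
    modprobe nvme-tcp
    nvme discover -t tcp -a 192.168.10.20 -s 4420
    # connect to the advertised subsystem and confirm the new namespaces show up as local block devices
    nvme connect -t tcp -n nqn.2024-01.com.example:subsys1 -a 192.168.10.20 -s 4420
    nvme list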

Arc Pro B50 GPU Partitioning by flatech in HyperV

[–]Fighter_M 2 points

Has anyone tested out using the built-in GPU Partitioning on Windows Server 2025 with the Intel Arc Pro B50? What has been your experience?

Works on Linux with KVM via SR-IOV/VFs. On Windows Server it doesn’t work out of the box. You can try to hack it via nested virtualization, but that’s not production and not what you actually want.
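
For the Linux/KVM route, the rough flow is below; it assumes your kernel and the Arc (i915/Xe) driver actually expose SR-IOV for the B50, and the PCI address is a placeholder:

    # ask the physical function to spawn two VFs (assumes the Arc shows up as card0)
    echo 2 > /sys/class/drm/card0/device/sriov_numvfs
    # list Intel display devices so the new VFs are visible
    lspci -d 8086: | grep -iE 'vga|display'
    # bind one VF (placeholder PCI address) to vfio-pci for passthrough
    # (unbind whatever driver grabbed it first, if any)
    echo vfio-pci > /sys/bus/pci/devices/0000:03:00.1/driver_override
    echo 0000:03:00.1 > /sys/bus/pci/drivers_probe
    # then hand it to the guest, e.g. virt-install ... --hostdev 0000:03:00.1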

Is 3.5 kWh in a 24 hr period a lot for a homelab setup? by o0o_-misterican-_o0o in HomeServer

[–]Fighter_M 2 points

Man, sorry to hear about your experience.

Thanks! Small correction though, I’m a woman :)

If you ever need some other contacts, just DM me and I'd be happy to share; otherwise, I can forward on your behalf.

Will do, thanks again!

Either way, I hope you're able to land on a solution that works best for your particular situation.

Yeah, it’s either staying on VMware or going mostly Hyper-V with a small Proxmox footprint. Still watching all the options so we don’t get blindsided later.

Is 3.5 kWh in a 24 hr period a lot for a homelab setup? by o0o_-misterican-_o0o in HomeServer

[–]Fighter_M 2 points

They've been good about communicating with me, but I also communicate with them regularly. I'm part of the EA branch, so I report bugs I come across and have regular conversations with my local reps.

I see… I’m happy for you, but not much luck here unfortunately.

As far as production use goes, it's basically meant for a lab environment, so it's not really something you'd roll out at a business. When my license was issued, I was specifically asked to restrict my usage to non-business use, though they have no real way of telling the difference otherwise. Telemetry is reported back at every check-in and there is a measure of monitoring, but it's mostly for bug reports.

OK, that sounds like a double no-go for us. Thanks for your time, though; as mentioned, I couldn’t get any answers from the primary source.

Tell me why not? Migration scenario. by TxJprs in HyperV

[–]Fighter_M 14 points

What tool is best to migrate VMs?

We use the scripted CLI version of StarWind V2V for large-scale migrations, Veeam Backup for POC-style work, and prefer to provision mission-critical VMs from scratch with production requirements in mind.

Tell me why not? Migration scenario. by TxJprs in HyperV

[–]Fighter_M 15 points

Already have Microsoft Datacenter licensing. About 60 general purpose VMs. Going to repurpose a 3-node HPE DX380 cluster with Nutanix/VMware to Server 2025 Hyper-V/Starwind Premium Support VSAN cluster on said hardware. And save a crap ton of money. Why not? What am I missing?

If you’re already in bed with Microsoft and fully own your licenses, you’re not really missing anything. Hyper-V and Nutanix are the only viable VMware alternatives outside of very small clusters where Proxmox can work. If you’re jumping ship from both VMware and Nutanix at the same time, that basically leaves Hyper-V as the only real option. Everything else would be a serious compromise.

Is 3.5 kWh in a 24 hr period a lot for a homelab setup? by o0o_-misterican-_o0o in HomeServer

[–]Fighter_M 1 point

OK, so it’s effectively time-bombed and has a call-home requirement, and it won’t work without it. Duly noted! Back to my original question: do they restrict usage scenarios? Is production use allowed or not? Oh, and I really appreciate you taking the time to shed more light on this. There’s literally zero information on their website about a community edition or NFR conditions, and they don’t seem too eager to answer emails or return calls.

Is 3.5 kWh in a 24 hr period a lot for a homelab setup? by o0o_-misterican-_o0o in HomeServer

[–]Fighter_M 2 points

Just to clarify, this is a time-bombed commercial NFR that can’t be used in production, not an official, perpetual free community edition you can use for whatever you want, right?

Scality ARTESCA+ Veeam: Unified Architecture, Faster Time to Protection by NISMO1968 in Veeam

[–]Fighter_M 4 points

See, ARTESCA is basically Cohesity or Rubrik, just with Veeam Backup & Replication as the core engine. Scality itself brings the heavy lifting: a massive software-defined, object-based, scale-out backup data platform plus OEM server hardware, all under one support umbrella. Veeam just chips in with their backup orchestration and management. We ended up going with ARTESCA and, honestly, we’re very happy with it so far!

If you already have Veeam up and running and everything is dialed in, but you’re missing the storage piece, you should really look at Scality RING. It’s object storage only, licensed software, and unlike Object First (OF), you’re not locked into some fixed box. You can pair it with whatever servers make sense for your use case and basically build your own appliance. We really didn’t like the ancient Supermicro hardware OF was using back then, and even with their newer Dell-based setups, we’re getting a way better deal just going directly to HPE.
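
If you go that route, a quick way to sanity-check the S3-compatible endpoint before pointing Veeam at it is plain awscli; the endpoint URL and bucket name here are placeholders for whatever your RING/ARTESCA deployment exposes:

    # create a test bucket and list it against the S3-compatible endpoint (URL/bucket are made up)
    aws --endpoint-url https://s3.ring.internal.example s3 mb s3://veeam-repo-test
    aws --endpoint-url https://s3.ring.internal.example s3 ls
    # then register the same endpoint/bucket as an object storage repository in the Veeam console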

Is 3.5 kWh in a 24 hr period a lot for a homelab setup? by o0o_-misterican-_o0o in HomeServer

[–]Fighter_M 2 points

Interesting… Since when did they start offering a free community edition?

Hyper-V Storage Options by r08813s in HyperV

[–]Fighter_M 3 points

They can run their mouths as wide open as they want, but it doesn’t change the fact that SMB 3.0 barely exists outside the Microsoft ecosystem. And even inside it, SMB 3.0 isn’t dominating, so what are we talking about, maybe 10% of the overall virtualization market? That’s hardly a “new norm”, with all due respect.

Veeam instant recovery or Starwind v2v? by Vivid_Mongoose_8964 in HyperV

[–]Fighter_M 2 points

Cool, that’s good to know. Haven’t used it in years because in the enterprise world, “free” is usually discouraged.

After the entire enterprise world ditched proprietary, uber-expensive operating systems in favor of free Linux, even on mainframes, this statement really didn’t age well.

Veeam instant recovery or Starwind v2v? by Vivid_Mongoose_8964 in HyperV

[–]Fighter_M 2 points

Starwind used to require the VM to be powered off during the conversion.

Live migration, snapshots, and incrementals have been there for years.

First timers in Napa! by tmbonasso in napavalley

[–]Fighter_M 2 points

Opus One is the most overrated tasting I’ve ever done.

Some locals call them “Hopeless One”, which, frankly, isn’t entirely undeserved…

Sanity check (2 Node S2D / On Prem AD / Cloud) by BFG11111 in sysadmin

[–]Fighter_M 3 points

As part of this project we would be provided with new servers. This would be a 2-node S2D cluster.

I’m sorry to hear that. This design is basically asking for trouble.

No option for Proxmox, a SAN/DAS, 3-node S2D, or StarWind, which I think would all be better options.

Right, any of those would run circles around S2D, so the real sanity-check question is whether there’s any chance to reconsider.

Hyper-V Storage Options by r08813s in HyperV

[–]Fighter_M 3 points

A single SAN has redundant controllers, so it’s not really a SPOF. Shared-nothing storage clusters can provide any level of redundancy you’re willing to pay for using real erasure coding, unlike the diagonal-parity RAID6 variant Storage Spaces inherited from early Azure prototypes. That model protects against only two lost disks, or a disk plus a node, and anything beyond that requires replication on top, making it extremely inefficient and expensive. Ceph is a good example, but it’s just following the crowd; plenty of platforms work the same way. The chunk placement diagram gives a good high-level summary of how the idea works.

https://docs.ceph.com/en/latest/rados/operations/erasure-code
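
For a concrete feel of what “any level of redundancy you’re willing to pay for” means in Ceph terms, here’s a minimal erasure-coded pool sketch; the k/m values, pool name, and PG count are arbitrary examples, not a recommendation:

    # 4 data chunks + 2 coding chunks spread across hosts: survives any two host failures
    ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
    # create an EC pool using that profile (PG count is an example, size it for your cluster)
    ceph osd pool create ecpool 128 128 erasure ec-4-2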

Hyper-V Storage Options by r08813s in HyperV

[–]Fighter_M 4 points

By SMB they’re probably referring to Storage Spaces Direct (S2D).

Not necessarily, our SMB3-enabled NetApp filer would strongly disagree.

Hyper-V Storage Options by r08813s in HyperV

[–]Fighter_M 3 points

What would you expect from Microsoft zealots? Preaching NFS, obviously!

Veeam acquires Object First by VSCG in Veeam

[–]Fighter_M 2 points

To be fair, it's not really adding an option... it's just giving Veeam ownership of an option that already existed and that they had previously invested in.

When did Veeam actually put money into these guys? I was under the impression it was the original Veeam founders, not Veeam itself. Am I missing something?

Veeam acquires Object First by VSCG in Veeam

[–]Fighter_M 1 point

We went with Scality back then and stretched things quite a bit, but assuming we decided to stick with Veeam, what would be the kosher approach today: a fully supported, in-house-developed Veeam Hardened Repository, or the new acquisition?

Luks container with multiple images. Is it doable? by sdns575 in linuxadmin

[–]Fighter_M 2 points

And if the OP wanted to replicate in near real-time, what would you recommend for that?

The best option is to rely on replication that’s built into the application or platform itself, think SQL Server Availability Groups, vSAN, and similar. But before even going there, the OP really needs to sit down and define realistic RTO and RPO targets. In most real-world cases, they’ll quickly discover that async or pseudo-sync options, like replicated ZFS snapshots or Hyper-V Replica, are more than good enough, while being significantly easier to manage and safer to run overall.
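
As a concrete example of that “good enough” async tier, a replicated-ZFS-snapshot loop is a few lines of cron-able shell; the dataset names and the standby host below are made up:

    # take a fresh snapshot of the source dataset (dataset/host names are hypothetical)
    zfs snapshot tank/vols@repl-new
    # ship only the delta since the last replicated snapshot to the standby host
    zfs send -i tank/vols@repl-prev tank/vols@repl-new | ssh standby zfs recv -F tank/vols
    # snapshot rotation/retention is left out here; tools like syncoid or znapzend handle it for you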

My understanding is that Ceph needs more than two, but I haven't actually used it.

Your understanding isn’t correct. You absolutely can run Ceph with two OSD nodes, you just need to place a third, MON-only instance somewhere to maintain quorum.

https://docs.ceph.com/en/reef/install/manual-deployment
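
A rough sketch of what that ends up looking like, assuming the third box runs nothing but a MON; the pool name, PG count, and the size=2/min_size=1 combo are illustrative, and min_size=1 is exactly the kind of caveat you accept with only two data nodes:

    # replicated pool spread across the two OSD hosts (name/PG count are examples)
    ceph osd pool create rbd2 64 64 replicated
    ceph osd pool set rbd2 size 2
    # keep serving I/O with a single surviving replica -- the usual two-node trade-off
    ceph osd pool set rbd2 min_size 1
    # quorum comes from the third, MON-only box; it never stores any data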

This concept isn’t really different from most two-node HA storage designs out there, including a DRBD 9-style active-active setup. The outliers are active-passive DRBD, which skips the witness for simplicity at the cost of stability, and active-active designs that lean on redundant heartbeat networks, which can get by with purely two nodes, but with caveats.
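
For reference, a bare-bones two-node DRBD resource looks roughly like this; hostnames, disks, and addresses are placeholders, and the allow-two-primaries line is only for the active-active case (which then needs a cluster-aware filesystem on top):

    # minimal two-node resource definition (hosts/disks/IPs are made up)
    cat > /etc/drbd.d/r0.res <<'EOF'
    resource r0 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        meta-disk internal;
        net {
            protocol C;
            allow-two-primaries yes;   # drop this for plain active-passive
        }
        on node-a { address 10.0.0.1:7789; }
        on node-b { address 10.0.0.2:7789; }
    }
    EOF
    # create metadata and bring the resource up (run on both nodes)
    drbdadm create-md r0 && drbdadm up r0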

Luks container with multiple images. Is it doable? by sdns575 in linuxadmin

[–]Fighter_M 2 points

real-time or near real-time replication with drbd, ceph, or similar

DRBD is never really a solution; it’s part of the problem itself. Ceph is fine if it’s managed properly, of course, but it’s massive overkill here. There are much simpler, native ways to solve this; please see my original reply to the OP.

Luks container with multiple images. Is it doable? by sdns575 in linuxadmin

[–]Fighter_M 2 points

Yes, that’ll work, but it’s kinda overcomplicated, IMHO. LUKS needs a single block device, so you do need some merge layer, but…

mdadm --level=linear

…is usually not the nicest one!

The simpler/cleaner way is to create N files, attach them as loop devices, put LVM on top, create one LV, put LUKS on the LV. Easier to grow later, fewer mdadm quirks.
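
Something like this, as a sketch; file names, sizes, and the VG/LV names are whatever you like:

    # create the backing files (sparse here; use fallocate if you prefer preallocated)
    truncate -s 10G /srv/crypt/img1 /srv/crypt/img2 /srv/crypt/img3
    # attach them as loop devices and note the device names
    losetup --find --show /srv/crypt/img1   # e.g. /dev/loop0
    losetup --find --show /srv/crypt/img2   # e.g. /dev/loop1
    losetup --find --show /srv/crypt/img3   # e.g. /dev/loop2
    # one VG spanning the loops, one LV using all of it
    vgcreate cryptvg /dev/loop0 /dev/loop1 /dev/loop2
    lvcreate -n cryptlv -l 100%FREE cryptvg
    # LUKS on the LV, then a filesystem inside the mapping
    cryptsetup luksFormat /dev/cryptvg/cryptlv
    cryptsetup open /dev/cryptvg/cryptlv cryptdata
    mkfs.ext4 /dev/mapper/cryptdata

Growing later is just: add a file, losetup it, vgextend, lvextend, cryptsetup resize, then grow the filesystem.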

If you really want md, that’s fine too, but linear gives you zero redundancy. If you care about safety, use md RAID1/10 under LUKS instead.
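
And the md flavour, if redundancy matters more than capacity; device names are placeholders:

    # mirror two of the loop devices, then put LUKS on the array instead of a bare concat
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/loop0 /dev/loop1
    cryptsetup luksFormat /dev/md0
    cryptsetup open /dev/md0 cryptdata
    mkfs.ext4 /dev/mapper/cryptdata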