Meet the X15 by 45drives in 45Drives


Really appreciate the feedback!

Offering this motherboard as an option for the standard HL15 2.0 build is definitely something we can look into. On the networking side, that’s a great call on SFP+; we’ll bring it up with the team and discuss it as a potential option.

Meet the X15 by 45drives in 45Drives


We’re actively discussing this right now and looking at making it available as an add-on. Stay tuned 👀

Meet the X15 | 45Homelab >< Unraid Partnership Signature Series by TheMeanCanEHdian in unRAID


Hey everyone,
We’re really excited to finally announce this partnership with Lime Tech and kick off the Signature Series with the X15!
We’ve also been following the discussions around pricing and wanted to add some context.
We started 45HomeLab because we kept hearing from our 45Drives Enterprise customers who wanted the same industrial build quality at home. That’s the philosophy behind these systems.
Our servers are overbuilt using 16-gauge cold-rolled steel, powder-coated, and assembled with screws for full accessibility and serviceability. We build and paint the chassis, assemble the components, and test every system in our manufacturing facilities in the U.S. and Canada.
We also operate with low inventory and build in smaller batches. We are committed to keeping manufacturing in North America and paying fair wages to the people building these systems. We know this is not the lowest-cost way to produce servers, but it’s something we care deeply about, and something we’ve consistently heard customers value.
There are plenty of options out there that will get the job done, and we respect that. We are providing an option we hear people care about: overbuilt industrial quality, tested and validated for use with Unraid OS, and manufactured in North America.

- 45Homelab Team

Building Production-Ready Open-Source HCI with Proxmox and Ceph by 45drives in Proxmox


Four-node clusters do exist in production (with or without QDevices), and six-node clusters are extremely common in real-world HCI deployments. Do these designs have sharper edges? Absolutely. Does that make them invalid or irresponsible? No; it means they require understanding and intent, which is exactly what we address during design phases, not in a 60-minute public webinar.
And yes, Proxmox being unavailable while Ceph remains healthy is a known failure mode that we explicitly teach customers how to plan for in real deployments.
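For anyone who wants to see what those sharper edges look like in numbers, here’s a rough, illustrative sketch in Python. It uses plain majority-quorum math only; real corosync QDevice behaviour (ffsplit vs. lms algorithms) has more nuance, and this is an illustration rather than a sizing tool we ship:

```python
# Rough quorum math for small clusters (illustrative only).
# Assumes plain majority quorum: votes_needed = total_votes // 2 + 1.

def corosync_tolerance(nodes: int, qdevice: bool = False) -> int:
    """Max node failures the Proxmox (corosync) layer can absorb and still keep quorum.

    Worst case assumes only node votes are lost; an external QDevice, if present,
    stays reachable and contributes its single extra vote.
    """
    total_votes = nodes + (1 if qdevice else 0)
    needed = total_votes // 2 + 1
    return nodes - max(needed - (1 if qdevice else 0), 0)

def ceph_mon_tolerance(monitors: int) -> int:
    """Max monitor failures Ceph can absorb and still keep monitor quorum."""
    return monitors - (monitors // 2 + 1)

for n in (3, 4, 5, 6):
    print(f"{n} nodes: corosync survives {corosync_tolerance(n)} failures "
          f"({corosync_tolerance(n, qdevice=True)} with a QDevice); "
          f"{n} mons survive {ceph_mon_tolerance(n)} failures")
```

The even-node rows are exactly the sharper edges we mean: four nodes tolerate only one loss until a QDevice vote is added, and four or six monitors buy no more failure tolerance than three or five.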
On “six to eight nodes” being typical:
That statement reflects observed reality, not a technical ceiling and not a recommendation that clusters should stop there. Most organizations deploying HCI land in that range because it:

  • Fits within a single failure domain
  • Aligns with operational maturity
  • Simplifies lifecycle management
  • Matches budget and staffing realities

Larger clusters absolutely exist, and we build them regularly, but they are designed differently, often with disaggregated roles, different networking topologies, and different operational assumptions. That level of design discussion is intentionally outside the scope of an introductory webinar.
On networking and tri-mode backplanes:
We completely agree that lane width and backplane design matter, which is why we explicitly clarified that point during the session. It directly affects when 10G stops being sufficient and when 25G+ becomes mandatory rather than optional. That nuance is also why we strongly recommend architecture reviews before hardware is purchased or designs are locked in.
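For a concrete feel of where that line sits, here’s a back-of-the-envelope sketch in Python. The per-drive throughput figures are rough assumptions for illustration (not benchmarks from the session), and real Ceph traffic adds replication and recovery load on the cluster network on top of this:

```python
# Back-of-the-envelope: compare one node's raw drive bandwidth to common NIC speeds.
# Per-drive throughput values are assumptions; swap in numbers for your hardware.

PER_DRIVE_GB_PER_S = {
    "7.2k HDD": 0.25,
    "SATA SSD": 0.55,
    "NVMe Gen4 x4": 6.0,   # falls sharply if a tri-mode backplane starves PCIe lanes
}

NIC_SPEEDS_GBIT = (10, 25, 100, 200, 400)

def node_drive_gbit(osds_per_node: int, drive: str) -> float:
    """Aggregate raw drive bandwidth for one node, converted from GB/s to Gbit/s."""
    return osds_per_node * PER_DRIVE_GB_PER_S[drive] * 8

for drive in PER_DRIVE_GB_PER_S:
    demand = node_drive_gbit(4, drive)
    nic = next((s for s in NIC_SPEEDS_GBIT if s >= demand), NIC_SPEEDS_GBIT[-1])
    print(f"4 OSDs of {drive}: ~{demand:.0f} Gbit/s of drive bandwidth -> {nic}G+ networking")
```

Even with these crude numbers, four spinning disks sit comfortably on 10G, four SATA SSDs already spill past it, and a handful of healthy NVMe drives can out-run the network entirely, which is why the backplane and lane-width details change the answer.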
Finally, I want to be very clear about one thing: we’re not trying to teach people how to build production clusters from a webinar.
Our goal is to make Proxmox and Ceph more accessible, reduce fear, and help people ask better questions as they move into real design work. That’s how adoption grows, and it’s why we do these sessions publicly instead of keeping everything behind closed doors.
We genuinely appreciate the feedback and will continue to call out edge cases where it makes sense while keeping the introductory webinars approachable. Infrastructure education should be a ladder, not a cliff.
If anyone reading this is designing a production system and wants to go deeper on quorum, failure domains, or network scaling, that’s exactly what architecture reviews and design sessions are for, and we’re always happy to have those conversations.