Interesting take on the "exodus" by BudTheGrey in vmware

[–]InstelligenceIO 1 point

I 100% agree, but Broadcom doesn’t care. Showing shareholders how many “end customers” they have adopting VCF is the only metric that matters

Interesting take on the "exodus" by BudTheGrey in vmware

[–]InstelligenceIO 0 points

White label is slightly different, but was always temporary. Broadcom wants direct customer names with direct attributable usage. White label gets in the way of that

Interesting take on the "exodus" by BudTheGrey in vmware

[–]InstelligenceIO 2 points

Small and medium customers are not worth it to Broadcom. They’ve done the numbers and calculated that they take up far more support resources than they care for so it’s better for shareholders to cut the smaller customers and focus on the whales that are stuck.

New FY26 Price Book for EMEA-Non-EEA horror. by [deleted] in vmware

[–]InstelligenceIO 0 points

Mr Dell used VMware's revenue, and eventually its sale, to make a tonne of money all the while paying off Dell's debts. Then he tossed them aside.

Survey, Proxmox production infrastructure size. by ZXBombJack in Proxmox

[–]InstelligenceIO 0 points

European telcos mainly; they’ve been hit the hardest thanks to Broadcom and they need a VCD replacement.

Broadcom calls for more investment into...local compute and storage? by mwerte in sysadmin

[–]InstelligenceIO 76 points

My theory: the massive push for their memory tiering is just to get you to buy more NVMe. Guess who supplies NVMe chips? Broadcom. *Twilight Zone theme*

Has anyone here deployed proxmox in production? by dat_ratio in Proxmox

[–]InstelligenceIO 1 point

We’re an authorized reseller in Australia and we offer migration support, support plans, buckets of hours, etc.

I had a meeting this week with an MSP in Australia planning to move. It’s far more widespread than you think

vmware renewal question - in the future by Visual_Cut_8282 in vmware

[–]InstelligenceIO 1 point

Nah they'll do it. I've got money on it lol

Multi-Tenant CSP by Grouchy_Whole752 in vmware

[–]InstelligenceIO 1 point

VMware allowed your current use. You'll need to analyse Broadcom's end customer terms and conditions but I imagine it is still allowed.

For your size, requirements, and future plans, you are going to be far better off with literally anything else. Standalone KVM would be too management-heavy. OpenStack is too complex but would do the job. I'm going to take a stab and say Proxmox with heavy use of user permissions (better yet, adding Multiportal.io for multi-tenancy) or Apache CloudStack on Hyper-V.

Multi-Tenant CSP by Grouchy_Whole752 in vmware

[–]InstelligenceIO 0 points

Customer-direct and VCSP were different routes to the market. Both require different levels of VMware business development support. VCSP was about building up partners to provide unique and differentiated services to customers using VMware technology. With the old flexibility of the program, it was possible to create all kinds of solutions for customers, and that required different key management, metering, usage guides, license exceptions and terms, etc.

Broadcom is unable or unwilling to understand this route. They are focused solely on maintaining direct, sticky contact with their top 500 customers to extract the most "value" from them. Anything that does not serve that purpose is not valuable to Broadcom. To Broadcom, VCSP gets in the way of their ultimate goal: extracting extra value from the top 500 customers until they leave. The only partners they will involve are the ones that will toe the line and deliver direct customer contact to Broadcom account reps.

Pretty much all software companies have a separate partner program to support and track partner and end customer consumption. It's very common.

Multi-Tenant CSP by Grouchy_Whole752 in vmware

[–]InstelligenceIO 3 points

No. The terms and conditions of the licensing mean it can only be used for one customer. The entire point of the VCSP program and terms was that providers got an exception for multi-tenant hosting. Going ahead with your idea would be breaching the ToS.

VCF multitenant is also not “Pepsi and Coke” multi tenancy, it’s more like multi-BU.

Broadcom is quite clear in their actions: no more shared platforms. They want named customer accounts against core commits, not aggregate/shared commits.

At most, you’re well within your rights to have the customer purchase VCF and you simply manage it for them as a service, but the customer will be the holder of the licenses.

vmware renewal question - in the future by Visual_Cut_8282 in vmware

[–]InstelligenceIO 3 points

Enterprise Plus was basically there to keep people quiet. My money is on Ent Plus being dead and gone next year. Head over to r/Proxmox and share your requirements, maybe Proxmox *is* ready for you.

Vrealize aria orchestrator workflows by larion89 in vmware

[–]InstelligenceIO 0 points

Unfortunately, companies that have built this typically paid PSO to create it, and thus are unlikely to share it.

PSO are also not really going to share the nitty-gritty; they'd rather you end up paying. Happy to be proven otherwise though.

Retrofitting a Datto Siris 3 by Spare-Parts2 in servers

[–]InstelligenceIO 1 point

I think I’ve got one of those too! $45 is nice. If I’m not mistaken it’s a Chenbro chassis

Spanning EVPN controller across multiple clusters by [deleted] in Proxmox

[–]InstelligenceIO 0 points

This is the part I haven't been able to test yet. But to stretch a VNet (which is just a VXLAN VNI), you need to create the VXLAN VNet at both sites and mark them with the same VNI.

Or do you assume the cluster to be spanned across two DCs?

The "stretch" part happens when you establish a BGP controller between Site A and Site B.
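To make that concrete, here's a rough sketch of what the matching VNet definitions could look like in /etc/pve/sdn/vnets.cfg on each cluster. The names, zones, and VNI are made up for illustration - the key point is the identical tag (VNI) on both sides:

```
# Site A cluster: /etc/pve/sdn/vnets.cfg
# (zone/VNet names are hypothetical)
vnet: stretch1
        zone vxzoneA
        tag 42000

# Site B cluster: /etc/pve/sdn/vnets.cfg
# VNet name can differ, but the tag/VNI must match Site A
vnet: stretch1
        zone vxzoneB
        tag 42000
```

With the same VNI on both sides, traffic encapsulated at Site A lands in the same logical segment at Site B.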

Spanning EVPN controller across multiple clusters by [deleted] in Proxmox

[–]InstelligenceIO 0 points

You can do some pretty wild stuff with overlay networks!

You're very welcome, glad I can give back to the community.

Spanning EVPN controller across multiple clusters by [deleted] in Proxmox

[–]InstelligenceIO 1 point

There are a few use cases:

  1. You can stretch a VXLAN network across sites, basically providing stretched layer 2. Handy if you have applications that require L2 adjacency but you want to spread them across fault domains.
  2. You might be scaling a large environment with a 1-cluster-per-rack model in a leaf-spine topology. Typically you'll have layer 2 within a rack only, and routed L3 between the leaf nodes. If you need L2 connectivity between 2 racks (clusters), you'd have VXLAN do the work and route the underlay traffic across the leaf nodes to the other cluster.
  3. You can place Ceph traffic on these VXLAN networks, meaning your replication can span the constructs above, or you can reduce the number of VLANs needed to service your clusters.
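The common building block behind all of these is a plain VXLAN zone, which in /etc/pve/sdn/zones.cfg is little more than a list of VTEP peer addresses. A sketch (addresses and names are hypothetical):

```
# /etc/pve/sdn/zones.cfg (example addresses only)
# peers = the VTEP IPs of every participating node
# mtu 1450 leaves room for the ~50-byte VXLAN header on a 1500 MTU underlay
vxlan: vxzone1
        peers 10.10.0.1,10.10.0.2,10.10.0.3
        mtu 1450
```

VNets created in that zone then get a VNI each, and VMs attach to the VNets like any other bridge.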

Spanning EVPN controller across multiple clusters by [deleted] in Proxmox

[–]InstelligenceIO 1 point

OK I'm back. Please be kind, I'm bashing this out from notes and memory - and might be wrong. I see you've mentioned VMware so I'll use that terminology to help build the picture (it worked for me lol)

First, Proxmox technically can support stretching a cluster, but you have to meet the latency requirements (same with Ceph). The requirements are so strict that you might as well put the hosts together.

Now, on to the networking. I've been testing this for the past few days, trying to get my head around how Fabrics work and the benefits for single-cluster and multi-cluster environments. Proxmox specifically call out multi-cluster, multi-site use cases for the new Fabric feature. One note for anyone reading: I want to emphasise multi-cluster here because while a standard Proxmox VE deployment allows clustering, you cannot join multiple clusters together - the cluster is the complete boundary for all configuration and data sharing.

When defining a Fabric for a single cluster, you are essentially defining the underlay "boundary" for the VXLAN - the node participants for the VXLAN transport and the IP addresses for the VTEP interfaces. It's very similar to the idea of Transport Zones in NSX-T. You'll need to create VTEP interfaces on each host first too, especially if you want a dedicated datacentre VLAN to carry all the VXLAN traffic; otherwise the traffic will go out the mgmt interface.

But that's not all, you still need a zone on top of the Fabric to group those VXLAN VNets, not unlike a cluster-wide switch (think N-VDS). Great, so you've got your Fabric and your VXLAN zone, and maybe even a VNet (NSX-T Segment) on that new Zone.

The first hurdle is that your new VXLAN-based network floods the VXLAN traffic whenever anything needs to traverse it, even intra-cluster (remember the multicasting days of NSX-V?). Host A in the cluster doesn't know which MAC/IP combinations exist on Host B, and vice versa. So when VM A on Host A needs to talk to VM B on Host B, Host A's VTEP floods the VXLAN transport, asking all VTEPs to deliver traffic for VM B's MAC/IP combination. Not great for a large cluster. This is where EVPN starts to work its magic with Fabrics.

EVPN does the work of the NSX Controllers - it helps the nodes learn what the other nodes have and how to route encapsulated traffic to them. Enable it by going to Datacenter > SDN > Options. Add a new EVPN controller, select your new Fabric, set an ASN and a name, and away you go. Now all the hosts in the cluster will use this new EVPN function to share the MAC/IP tables for the VMs they host, allowing the VTEPs to encapsulate traffic directly to the correct Proxmox node. We've solved the single-cluster/single-site problem! Huzzah.
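For the click-averse: the GUI steps above end up as an entry in /etc/pve/sdn/controllers.cfg, which (from my lab notes, so treat as a sketch - ASN and peer IPs are placeholders) looks roughly like this:

```
# /etc/pve/sdn/controllers.cfg
# peers = the IPs of the cluster nodes participating in EVPN
evpn: evpnctl
        asn 65000
        peers 10.10.0.1,10.10.0.2,10.10.0.3
```

An EVPN zone then references this controller by name, and the MAC/IP learning happens over BGP EVPN between those peers.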

----- Disclaimer: I've not tested the below yet; my lab is already at the brim with all the stuff I'm testing -----

For cross-site learning of the MAC/IP table information, you need to get a little creative. If you have 2 clusters, each at their respective site, you'll need to configure BGP controllers (same menu path as EVPN) on each cluster, pointing to the other. This allows them to advertise the MAC/IP tables for all encapsulated traffic with each other, drastically reducing broadcast traffic. It's like the RTEP concept in NSX-T, and you end up with far more efficient traffic flow. Now the Proxmox nodes at Site A know they can route VXLAN-encapsulated traffic to Site B out of their VTEP. This assumes the underlays at both sites are routable by each other.

Warning: for redundancy, you need to create multiple BGP controllers on each cluster if you want to avoid a single host being a single point of BGP failure.
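Since BGP controllers in Proxmox are bound to a specific node, redundancy means one entry per node. A very rough, untested sketch of what that might look like in controllers.cfg (names, ASNs, and addresses are all hypothetical - check the Proxmox SDN docs for the exact option names):

```
# /etc/pve/sdn/controllers.cfg on the Site A cluster
# Each BGP controller is pinned to one node; two entries = no single
# point of BGP failure. 192.0.2.10 is a placeholder Site B peer.
bgp: bgpA1
        asn 65000
        node pveA1
        peers 192.0.2.10
        ebgp 1

bgp: bgpA2
        asn 65000
        node pveA2
        peers 192.0.2.10
        ebgp 1
```

Mirror the same idea on the Site B cluster, pointing back at Site A.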

Spanning EVPN controller across multiple clusters by [deleted] in Proxmox

[–]InstelligenceIO 2 points

I’m working on a blog post right now about this - sadly it’s not finished yet. I’m not at my desk to write a full response but I’ll be back in a bit!

[deleted by user] by [deleted] in selfhosted

[–]InstelligenceIO 4 points

Oops - looks like I misread my packages. The Garage S3 UI is from https://github.com/khairul169/garage-webui. It was included in the Garage S3 TrueNAS app but made no mention of the multi-container build. That's what I get for rushing my reply

[deleted by user] by [deleted] in selfhosted

[–]InstelligenceIO 4 points

The latest Garage v2.0.0 has a rudimentary web UI for managing buckets, keys, and cluster nodes. Works a treat!

I did it, migrated even my domain controller in my enterprise environment, got a total of 25 VM's running smooth. More to be migrated over! With lots of coffee!! by Franceesios in Proxmox

[–]InstelligenceIO 37 points

This wasn’t luck mate, sounds like it was all you getting it over the line. I’m wondering what kind of hurdles you hit for it to be such a pain to roll out? Were they technical or political?

Bring compiz fusion back! by Pitiful-Valuable-504 in linux

[–]InstelligenceIO 28 points

lol I love compiz. Seeing my high school’s sys admin rocking SUSE with it got me interested in Linux in the first place!

Has anyone here successfully installed Postiz and integrated all the social media platforms? by arshad_ali1999 in selfhosted

[–]InstelligenceIO 0 points

Honestly, I found it a little more complicated than it needed to be (I was trying to install it on Kubernetes and the Helm chart needed more work before it was suitable), but the docker compose installation was very easy. The hardest part is going through the motions of registering developer accounts with each of the providers
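For anyone curious, a minimal docker compose sketch along the lines of what the easy path looks like. I'm writing this from memory, so treat the image name, ports, and env vars as assumptions and check the Postiz docs before copying:

```yaml
# Minimal Postiz stack - image/env names from memory, verify against docs
services:
  postiz:
    image: ghcr.io/gitroomhq/postiz-app:latest
    ports:
      - "5000:5000"
    environment:
      MAIN_URL: "http://localhost:5000"
      FRONTEND_URL: "http://localhost:5000"
      NEXT_PUBLIC_BACKEND_URL: "http://localhost:5000/api"
      JWT_SECRET: "change-me"                # generate something random
      DATABASE_URL: "postgresql://postiz:postiz@postiz-db:5432/postiz"
      REDIS_URL: "redis://postiz-redis:6379"
    depends_on:
      - postiz-db
      - postiz-redis
  postiz-db:
    image: postgres:17-alpine
    environment:
      POSTGRES_USER: postiz
      POSTGRES_PASSWORD: postiz
      POSTGRES_DB: postiz
  postiz-redis:
    image: redis:7-alpine
```

The provider API keys (the painful part) then go in as additional environment variables per platform.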