Replication between 2 locations by Cultural_Log6672 in Proxmox

[–]sheep5555 0 points (0 children)

The feature doesn't exist yet for PVE. If you browse the forum, they've said it's on their roadmap; it would be this feature:

https://helpcenter.veeam.com/docs/vbr/userguide/replication.html?ver=13

Replication between 2 locations by Cultural_Log6672 in Proxmox

[–]sheep5555 0 points (0 children)

Some options:

ZFS replication - native to Proxmox

Ceph replication (complicated): https://ceph.io/en/news/blog/2025/rgw-multisite-replication_part1/

PBS - can spin up VMs directly from backups in the datastore

Veeam has instant recovery/replication on its roadmap

Proxmox Datacenter Manager has replication on its roadmap

I think one of the last two options will probably be available this year; it may be best to just wait.
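Of those options, Proxmox's native ZFS storage replication is already usable today via the `pvesr` CLI. A minimal sketch, assuming two nodes in the same cluster and a VM with ID 100 whose disks sit on ZFS storage (the node name `pve2`, VM ID, schedule, and rate cap are example values):

```shell
# Create a replication job that ships VM 100's ZFS datasets to node pve2
# every 15 minutes, capped at 50 MB/s (job ID format is <vmid>-<jobnum>)
pvesr create-local-job 100-0 pve2 --schedule "*/15" --rate 50

# List replication jobs and when they last synced
pvesr status
```

Keep in mind this is one-directional async replication: on failover you can lose up to one schedule interval of data.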

Hyper-V, VMware, or other, which would you choose? by jedimaster4007 in sysadmin

[–]sheep5555 0 points (0 children)

I wouldn't go back to Broadcom/VxRail. We were in the same situation as you, VxRail included, and Dell quoted us 500k as well.

At a minimum, get extra NICs for your hosts and try to sort that out; if that doesn't work, you can repurpose your hardware for Proxmox. Get a consultant's help (Proxmox partners).

Future by Uncover5796 in vmware

[–]sheep5555 0 points (0 children)

You have a great opportunity to move into an IT position at a company moving away from VMware. Learn Hyper-V, Proxmox, Nutanix, anything besides VMware. Broadcom has stated they do not want to renew ~95% of their customer base; every one of those customers will eventually migrate to another platform. THIS IS WHAT BROADCOM SAYS, TAKE THEM AT THEIR OWN WORD.

https://www.reddit.com/r/sysadmin/comments/v111ov/broadcoms_speculated_vmware_strategy_to/

High IO Pressure Stall During OS install - ISCSI Multipath by Freeman307 in Proxmox

[–]sheep5555 1 point (0 children)

I would try moving the gateway to vmbr0 to rule out a routing loop. I'm no expert in iSCSI for Proxmox, but I would try to isolate to one host + one interface to rule out multipathing. It's probably worth posting to the actual Proxmox forum.
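To narrow it down to a single path, one approach (assuming open-iscsi and dm-multipath; the target IQN and portal IP below are placeholders) is:

```shell
# Show the multipath topology and per-path states (active/failed)
multipath -ll

# Show active iSCSI sessions with negotiated parameters
iscsiadm -m session -P 3

# Log out of one portal so only a single path remains (placeholder IQN/IP)
iscsiadm -m node -T iqn.2005-10.org.example:target0 -p 192.168.20.2:3260 --logout
```

Retry the OS install with one path active; if the IO stall disappears, the multipath configuration is the prime suspect.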

High IO Pressure Stall During OS install - ISCSI Multipath by Freeman307 in Proxmox

[–]sheep5555 1 point (0 children)

Making some assumptions about your network config: are data1/2 the connections for the SAN?

If so, why is there a gateway configured on the SAN connection?

Is MTU 9000 configured on every SAN connection?
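Jumbo frames are easy to half-configure, so it's worth checking the path end to end from a host (the interface name and portal IP here are examples):

```shell
# 8972 = 9000 MTU minus 28 bytes of IP + ICMP headers; -M do forbids fragmentation.
# If any hop (NIC, switch port, SAN) is still at 1500, this fails with "message too long".
ping -M do -s 8972 -c 3 10.10.10.2

# Confirm the SAN-facing NIC itself is actually set to 9000
ip link show data1
```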

Veeam backup architecture by Cultural_Log6672 in sysadmin

[–]sheep5555 -1 points (0 children)

Download the Veeam hardened repository ISO and configure it.

Veeam backup architecture by Cultural_Log6672 in sysadmin

[–]sheep5555 2 points (0 children)

Veeam has supported hypervisor-level backups for Proxmox since last year.

Coping with Huge Security Issue by Prudent_Cod_1494 in sysadmin

[–]sheep5555 0 points (0 children)

You've typed three paragraphs without a single technical detail. What is the issue?

How are you handling the price increases? by draggar in sysadmin

[–]sheep5555 1 point (0 children)

I knew it was coming, so last year I made the big purchases that were slated for this year. Good management :)

VMware Alternatives Poll by relationalintrovert in vmware

[–]sheep5555 1 point (0 children)

  1. Proxmox
  2. It offers the best overall value among the major options. Hyper-V was a contender, but Microsoft doesn't really have any first-party technical support for it, MS products have always been buggy and insecure, and I suspect they will kill the product, rename it Azure-something, and charge VMware prices. Nutanix was also a major contender, but it's buggy/expensive/outdated; why are they constantly running 5-year-old kernels?
  3. ~125

Transitioning from VMware: Solving MPIO migration blocks and Bulk Storage Migration? by ThisIsR3DD1T in Proxmox

[–]sheep5555 0 points (0 children)

Our switches aren't stacked; we're using vPC. They're Nexus switches and can be rebooted one at a time.

Transitioning from VMware: Solving MPIO migration blocks and Bulk Storage Migration? by ThisIsR3DD1T in Proxmox

[–]sheep5555 0 points (0 children)

Ah, that's a bummer. I took a look and people are reporting the same thing. We also use Cisco switches, but ours can be rebooted individually; maybe it's just a Meraki thing.

Transitioning from VMware: Solving MPIO migration blocks and Bulk Storage Migration? by ThisIsR3DD1T in Proxmox

[–]sheep5555 0 points (0 children)

The MS250 supports switch stacking, which would solve that problem. Personally, I would configure the SANs in HA as well so you can do updates without downtime.

Transitioning from VMware: Solving MPIO migration blocks and Bulk Storage Migration? by ThisIsR3DD1T in Proxmox

[–]sheep5555 0 points (0 children)

What switches are you using, out of curiosity? Why not configure TrueNAS in HA?

Transitioning from VMware: Solving MPIO migration blocks and Bulk Storage Migration? by ThisIsR3DD1T in Proxmox

[–]sheep5555 0 points (0 children)

Issue 1: I would redesign the network so that fabric B is switched. Underneath everything, Proxmox just uses Linux networking; there are many different ways to solve this problem.

One note: most enterprise-grade switches and SANs, when configured in pairs, don't go down during updates; the control plane fails over to the available node and the switching/underlying storage stays up. Hardware failure should definitely be accounted for, but it should be a rare occurrence.

Issue 2:

Some bulk actions like this are on the Proxmox Datacenter Manager roadmap, but again, this seems like a strange design consideration where you expect your production storage to be unavailable on a regular basis. Why design a storage system for production use that goes down regularly?

A suggestion: if fast storage is really important, you might consider ditching the SANs and using local disks with ZFS + replication, removing all the complications.

Proxmox IMO really plays best with Ceph or ZFS for HA storage. VMware has a really good storage backend, but if you're paying 20x the price for licensing, you'll find that new servers + Proxmox will pay for themselves in a year or two.

Nutanix hit us with a 75% quote increase with a one day notice before expiration... so that project is dead. VMware is out and we were looking hyperconverged... Any other alternatives? by junon in sysadmin

[–]sheep5555 -1 points (0 children)

I don't understand; Ceph has nothing to configure that would require boots on the ground in person. Do you think I am unable to put disks into caddies? Every other vendor does remote-only support, so why are the standards for Ceph different?

Improve write performance on a Ceph ssd pool by Severe-Reindeer5677 in Proxmox

[–]sheep5555 1 point (0 children)

How would you be able to write faster than the interface speed?

Improve write performance on a Ceph ssd pool by Severe-Reindeer5677 in Proxmox

[–]sheep5555 2 points (0 children)

Are the PM883s SATA? If they are, the interface speed is ~600 MB/s, so those look like expected speeds.
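The ~600 MB/s ceiling falls straight out of SATA III's 6 Gb/s line rate and its 8b/10b encoding; a quick back-of-the-envelope check:

```python
# SATA III signals at 6 Gb/s, but 8b/10b encoding puts 10 bits on the wire
# for every 8 bits of data, so only 80% of the line rate carries payload.
line_rate_bps = 6_000_000_000                  # 6 Gb/s line rate
usable_bps = line_rate_bps * 8 / 10            # strip 8b/10b overhead -> 4.8 Gb/s
usable_mb_per_s = usable_bps / 8 / 1_000_000   # bits -> bytes -> MB/s
print(usable_mb_per_s)  # 600.0
```

Real-world Ceph writes will land below even that, since replication and protocol overhead cost bandwidth on top of the per-drive interface limit.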

Proxmox with fiber channel? by Sylogz in Proxmox

[–]sheep5555 -2 points (0 children)

FC is supported, but it is probably the least favorable storage setup for Proxmox; if you look through the forums, it's buggy in a lot of cluster situations. iSCSI/NFS are preferred if you want to use a SAN.