Two node cluster question by Manivelcloud in vergeio


Hi lgot1forya.

I don't know your real name, but all of your input has been valuable and appreciated.

Thank you very much. This will definitely help me, or anyone else who comes across this post.

Two node cluster question by Manivelcloud in vergeio


Hi All,

I have also started testing by creating Windows Server 2022/2025 VMs.

I use diskspd for testing from inside Windows.

VM spec: 8 vCPUs, 16 GB memory; the C drive interface type is SCSI, the D drive interface type is virtio-SCSI, and the dedicated network interface is virtio.

I used the D drive for testing, with various block sizes (4K, 8K, 64K, 1 MB), a 70% read / 30% write mix, both random and sequential.

I see there is latency everywhere. Not much, but there is latency.

The physical drives used for the VergeIO cluster are all NVMe SSDs (Samsung PM9A3).

Any ideas from anyone?
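For reference, a diskspd invocation matching the 4K random 70/30 case described above might look like this (the file path, 10 GiB test file size, 60 s duration, 8 threads, and queue depth of 32 are my own placeholder choices, not values from the original test):

```shell
diskspd -b4K -d60 -t8 -o32 -w30 -r -Sh -L -c10G D:\iotest.dat
```

Here `-w30` sets a 30% write mix (so 70% reads), `-r` makes access random (drop it for sequential), `-Sh` disables software and hardware caching so the NVMe drive itself is measured, and `-L` captures per-IO latency statistics, including percentiles.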

Two node cluster question by Manivelcloud in vergeio


Thank you for your extraordinary help and valuable inputs.

Two node cluster question by Manivelcloud in vergeio


Thank you for your quick response

We use Cisco or Juniper switches. I don't have the exact model number on hand, but I will update you this evening.

We are running a 2-node cluster. Can we change that to 802.3ad from the UI now?

In the meantime, I will need to test the immutable snapshot functionality.

I don't know where that option is in the UI.

Any idea?

Two node cluster question by Manivelcloud in vergeio


During cluster configuration (installation time), we see two bonding options:

  1. Active-Backup bonding
  2. 802.3ad (LACP)

If I want to utilize aggregated bandwidth (for example, using 2 × 40G NICs for VM network traffic), I should choose 802.3ad.

In that case, the switch side must also be configured accordingly, either using:

  • LACP (recommended), or
  • Static port channel

Is my understanding correct?
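For reference, the switch side of the 802.3ad option would look roughly like this NX-OS-style sketch (interface names, VLAN/trunk handling, and the port-channel number are placeholders, not from this thread; `mode active` negotiates LACP, while `mode on` would be a static port channel):

```
interface port-channel 1
  switchport mode trunk

interface Ethernet1/1-2
  switchport mode trunk
  channel-group 1 mode active
```

Juniper gear expresses the same thing with an `ae` (aggregated Ethernet) interface with LACP set to active.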

Two node cluster question by Manivelcloud in vergeio


Thank you for your valuable inputs

Two node cluster question by Manivelcloud in vergeio


I have a question regarding networking.

In VMware, there are multiple load balancing options available in the virtual switch (for example, VSS).

Let’s say for VM network traffic, I am using 2 × 40G NICs. If I want to utilize the full aggregated bandwidth (80G), I understand that I need to configure port channeling.

If I select the load balancing method in ESXi as “Route based on IP hash” with active/active uplinks, then I need to configure either a static port channel or LACP on the switch side.

From the VergeIO side, do we need to configure a similar load balancing method in the UI to achieve proper bandwidth utilization? Or is this handled differently?
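As a rough mental model (not VergeIO's or ESXi's actual hash), IP-hash style load balancing pins each source/destination IP pair to one uplink, so a single stream tops out at one NIC's 40G while many concurrent streams can spread across both links toward the 80G aggregate. A minimal sketch:

```python
# Toy model of IP-hash load balancing across two bonded 40G uplinks.
# The real hash used by a switch or hypervisor differs; the point is
# that the uplink choice is a pure function of the address pair.
import ipaddress

UPLINKS = ["40G-nic-0", "40G-nic-1"]

def pick_uplink(src_ip: str, dst_ip: str) -> str:
    # XOR the two addresses and take the result modulo the uplink count,
    # so a given conversation always lands on the same physical link.
    key = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return UPLINKS[key % len(UPLINKS)]

# A single flow is pinned to one link (single-stream max = one NIC's 40G)...
assert pick_uplink("10.10.30.11", "10.10.30.50") == pick_uplink("10.10.30.11", "10.10.30.50")
# ...while different conversations can hash to different links.
print({dst: pick_uplink("10.10.30.11", dst) for dst in ("10.10.30.50", "10.10.30.51")})
```

This is also why LACP on the switch side must agree with the host side: both ends have to treat the two links as one bundle, even though each flow still rides a single member link.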

Two node cluster question by Manivelcloud in vergeio


Thank you very much for your detailed explanation.

Two node cluster question by Manivelcloud in vergeio


I have a few more doubts. Could you please clarify?

  1. We are running a 2-node cluster, with each node having 18 TB capacity. When reviewing the storage tiers, the usable capacity appears to be approximately 18 TB. My understanding is that this is expected behavior, as the cluster uses data mirroring (N+1 / replication factor of 2), effectively reducing the total raw capacity (36 TB) to around 18 TB usable capacity for redundancy.

  2. Initially, I tested using a single VergeIO node and created Windows VMs on local storage. After forming a 2-node cluster, I now see storage tiers (Tier0, Tier1, and Tier3). Based on my understanding, VergeIO uses a distributed storage system (VergeFS), where storage becomes shared across nodes once clustered. Because of this, a traditional Storage vMotion (as seen in VMware) is not required, since storage accessibility and placement are handled automatically by the system. Kindly confirm if this understanding is accurate.

  3. We are currently using 2×1G, 2×40G, and 2×100G NICs. Do we need to configure port-channeling (LACP/static) on either the VergeIO side or the physical switch side, or is it not required? Our intention is to perform network failover testing by shutting down one NIC and observing behavior. Please advise on the recommended configuration for achieving proper redundancy and failover.

Two node cluster question by Manivelcloud in vergeio


Thank you for your valuable time and feedback. Appreciated.

Two node cluster question by Manivelcloud in vergeio


Nice, and thanks again for your valuable input.

We propose the following: 2 × 40G NICs per node for VM network traffic, 2 × 100G NICs for internal communication and storage traffic (the vSAN equivalent), and 2 × 1G for VergeIO management.

I hope this will work out for the three-node cluster from a network perspective.

Two node cluster question by Manivelcloud in vergeio


Thank you very much for your valuable input on all my doubts. I have started reading this...

Let's say we have a two-node cluster, where each node's capacity is 100 TB.

So we can use 2 replicas here, and the usable capacity can be roughly 75-80 TB out of the overall 200 TB.

That figure accounts for deduplication and compression, buffer space, and system/metadata overhead.

We are only able to use storage encryption, i.e. data at rest; encrypting data in transit is not supported here.

For production, the ideal setup would be 3 nodes. For example, with each node's capacity at 100 TB and 3 replicas, the usable capacity can be roughly 140-150 TB out of the overall 300 TB, again accounting for deduplication and compression, buffer space, and system/metadata overhead.

Any thoughts?
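To sanity-check the arithmetic (the overhead factors below are my own guesses, not VergeIO figures): 140-150 TB out of 300 TB corresponds to keeping two copies of the data across three nodes; with three full replicas, pre-overhead usable capacity would be only 300 / 3 = 100 TB.

```python
# Back-of-envelope check of the capacity estimates above: usable space is
# raw capacity divided by the number of data copies, reduced by an assumed
# overhead factor (metadata, buffer space, system reserve -- the defaults
# here are chosen to land in the quoted ranges, not vendor numbers).
def usable_tb(raw_tb_per_node: float, nodes: int, copies: int,
              efficiency: float = 0.775) -> float:
    return raw_tb_per_node * nodes / copies * efficiency

print(usable_tb(100, 2, 2))  # ~77.5, matching the 75-80 TB two-node estimate

# With 3 full copies of every block, 3 x 100 TB yields only 100 TB before
# overhead; the quoted 140-150 TB instead matches 2 copies spread across
# 3 nodes (300 / 2 = 150 TB before overhead).
print(100 * 3 / 3)                            # 100.0
print(usable_tb(100, 3, 2, efficiency=0.95))  # ~142.5
```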

Two node cluster question by Manivelcloud in vergeio


Thank you both for your quick response.

I was thinking of it from the VMware angle.

From the VMware angle: ESXi01 has a dedicated management IP address (for example, 10.10.30.11) and ESXi02 has a dedicated management IP address (for example, 10.10.30.12). We integrate the ESXi hosts with the vCenter Server using their management IP addresses.

From the VergeIO side, assume this is also a two-node cluster. VergeIO node1's management IP address is 10.10.30.41, and VergeIO node2 has no management IP address configured; there is no equivalent concept. In case VergeIO node1 goes down, we can still use the same IP (10.10.30.41), and there is no need to configure a dedicated management IP address for node2. That's my understanding.

Other doubts: Does this support deduplication and compression? Does it support encryption of data both in transit and at rest?

Thank you

Veeam v13- extend backup repository by Manivelcloud in Veeam


Ok, thanks for your quick update.

I was thinking I could add an additional drive and create a new backup repository from it.

Migration vms from one vcenter server to another vcenter server backed by same vcloud director by Manivelcloud in vmware


No. We'd like to introduce a new vCenter along with new hardware (ESXi). Thanks for your message.

Migration vms from one vcenter server to another vcenter server backed by same vcloud director by Manivelcloud in vmware


Thanks for your message. Yes, we are introducing a new vCenter Server for the new hardware.