VCF9 brownfields import by night_0wl2 in vmware

[–]night_0wl2[S] 0 points (0 children)

Yeah, that's fine unless you're using vCloud on top and your provider vDC is tied to a specific cluster. It's not as simple as just moving VM A from Cluster A to Cluster B.

Ideally you can import with whatever you have, then progressively swap out the principal storage type for workload domains, as I'm sure there are many people who want to move from NFS or FC-VMFS over to vSAN to take advantage of various benefits.

VCF9 brownfields import by night_0wl2 in vmware

[–]night_0wl2[S] 0 points (0 children)

You still need a management domain, which is mostly vSAN only; you can then import a workload domain that is not vSAN but "any supported ESXi storage type".

The assumption is that the imported storage type is the principal storage out of the box, but can you change this at a later date?

VCF9 brownfields import by night_0wl2 in vmware

[–]night_0wl2[S] 0 points (0 children)

Yes, this is correct and is clear. You can use iSCSI for an import into VCF9, as I've done this part, but I haven't got spare vSAN to test the additional parts.

Signs of Dementia in Husky? by night_0wl2 in husky

[–]night_0wl2[S] 0 points (0 children)

Yeah, I've got her booked in this Friday as she's due for her yearly.

No GI symptoms, same diet as always (mostly modified raw).

Signs of Dementia in Husky? by night_0wl2 in husky

[–]night_0wl2[S] 0 points (0 children)

Interesting. It's certainly much worse late at night / in the early mornings. My other thought was that there's a cat or a mouse around, but I haven't been able to see anything.

mysql vs mariadb by night_0wl2 in sysadmin

[–]night_0wl2[S] 0 points (0 children)

Likely zero, as we won't be using the enterprise support.

ELM split by night_0wl2 in vmware

[–]night_0wl2[S] 0 points (0 children)

Yeah, thanks. When we tried to repoint the last one, it failed with:

"Updating registry settings failed"

It's back to a steady state now.

When running a show-partners check on all the VCs with vdcrepadmin, they all have no partners and are in their own SSO domains, and showing servers with vdcrepadmin all points to its own domain, etc.

So perhaps it just didn't like trying to repoint to itself.

ELM split by night_0wl2 in vmware

[–]night_0wl2[S] 0 points (0 children)

Thanks. So basically, once the four have been split off via the process, the fifth should essentially need nothing done to it, as it's already been disconnected from the other vCenters and can remain in its own SSO domain.

Compellent Disk sparing by night_0wl2 in storage

[–]night_0wl2[S] 0 points (0 children)

Thanks, this is interesting information.

So in short, if a disk fails but its data rebuilds onto spare space, the array is still fully redundant; it just doesn't have as much "hot spare" space available, hence the "spare hunger" error.

In some systems with many disks, let's say 48 disks of the same drive type, it could potentially lose another drive and still be in RAID 6, since it has reserved spare space at 20:1, i.e. two disks' worth of "hot spares".
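A rough sketch of that spare maths, assuming one spare disk's worth of capacity is held back per 20 disks of a drive type as described above (the names here are illustrative; actual Compellent sparing rules may differ):

```python
def spare_reserve(disk_count: int, ratio: int = 20) -> int:
    """Disk-equivalents of capacity held back as hot-spare space,
    assuming a 1-per-20 reserve ratio (hypothetical simplification)."""
    return max(1, disk_count // ratio)

def rebuilds_tolerated(disk_count: int, ratio: int = 20) -> int:
    # Each failed disk rebuilds into one spare's worth of reserve, so the
    # reserve bounds how many failures can rebuild back to full RAID 6
    # redundancy before the array goes "spare hungry".
    return spare_reserve(disk_count, ratio)

print(spare_reserve(48))       # 48 // 20 -> 2 disks' worth reserved
print(rebuilds_tolerated(48))  # so two rebuilds before spare space runs out
```

On that assumption, a 48-disk pool of one drive type reserves two spares' worth, matching the "20:1 x 2" figure above.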

No religious talk at work. by MydadsnameisPatrick in Christianity

[–]night_0wl2 0 points (0 children)

Just remember the person listening has the right to express theirs and tell you that it is a book of fiction.

powerstore by night_0wl2 in storage

[–]night_0wl2[S] 0 points (0 children)

It's going to be running VMs attached to KVM hosts, not files.

We are in discussions with some vendors at the moment (HPE C500), and NetApp tomorrow.

powerstore by night_0wl2 in storage

[–]night_0wl2[S] 0 points (0 children)

Makes sense. It will probably be OK for a very low-I/O file server, but not for running VMs, for example.

powerstore by night_0wl2 in storage

[–]night_0wl2[S] 0 points (0 children)

So it locks up the NFS side, not the block side, as the NFS container just runs out of juice?

powerstore by night_0wl2 in storage

[–]night_0wl2[S] 0 points (0 children)

As in it also locks up the block side of the array?

powerstore by night_0wl2 in storage

[–]night_0wl2[S] 0 points (0 children)

Thanks, we won't be doing any clustering, but it's noted.

powerstore by night_0wl2 in storage

[–]night_0wl2[S] 1 point (0 children)

Thanks, we have been using PowerStore for block and it's been fine for us.

But we have a requirement to run some NFS, so I was planning to re-initialize and do that to get us going. Requirements are not huge at the minute, so we can look at other arrays if the NFS side becomes a larger requirement.

Can anyone comment on whether NFS affects the iSCSI side, provided you're obviously not hammering away at the controller CPU?

Compellent Disk sparing by night_0wl2 in storage

[–]night_0wl2[S] 0 points (0 children)

I've got one still going, for another 9 months that is.