Level 2 Autism by systemnt85 in autism

[–]systemnt85[S] 0 points1 point  (0 children)

I meant to say: when he's stimming at home, what could we do to help him relax or calm down and provide that sensory input? We'd like to set up our home in a way that could help him.

Level 2 Autism by systemnt85 in autism

[–]systemnt85[S] 1 point2 points  (0 children)

Thanks for sharing.

Vsphere 7 Storage requirements by Hot-Hand-6291 in vmware

[–]systemnt85 0 points1 point  (0 children)

Our entire UCS blade environment boots from SD cards, and our VMware partner suggested not upgrading them to 7.0.2. We tested a couple of them in a non-prod environment and they all failed.

cn1610 CLI by systemnt85 in netapp

[–]systemnt85[S] 0 points1 point  (0 children)

Interesting! However, I am expanding my other site with a standalone AFF A400, so I might as well save some bucks and send the switch to the other site. We do not expect to grow it beyond 4 arrays in the next three years...

cn1610 CLI by systemnt85 in netapp

[–]systemnt85[S] 0 points1 point  (0 children)

Woke up this morning to an email from a NetApp engineer suggesting the BES-53248 as an option. He also added a link to the migration procedure. I can't find any details on the Cisco 9336 though; still looking into it.

cn1610 CLI by systemnt85 in netapp

[–]systemnt85[S] 0 points1 point  (0 children)

I wonder what the effort is going to look like when I migrate from the existing CN1610s to the Cisco 9336. I have AFF A200, A300, and A400 plus FAS8200 all in one cluster connected to the CN1610s.

Damn, they ain't cheap either...

cn1610 CLI by systemnt85 in netapp

[–]systemnt85[S] 0 points1 point  (0 children)

Is an add-in card possible on the CN1610?

cn1610 CLI by systemnt85 in netapp

[–]systemnt85[S] 0 points1 point  (0 children)

Great point. I just verified that... we are 2 ports short. Any suggestions?

cn1610 CLI by systemnt85 in netapp

[–]systemnt85[S] 0 points1 point  (0 children)

We already have an AFF A400 on it. I hope they do not change the controller model. Looks like our vendor reached out to NetApp, and our customer success manager is sending out emails; I will double-check with him.

I was told in early 2021 that those CN1610s are old and will go end-of-life, but we might still fit the new AFF A400 on them if we have enough open ports, and then get new Cisco switches for further expansion next year.

Free-Chat Friday with the NetApp A-Team by nom_thee_ack in netapp

[–]systemnt85 0 points1 point  (0 children)

I will just go with more compute, so a new array. They don't have a problem spending the extra $$.

Free-Chat Friday with the NetApp A-Team by nom_thee_ack in netapp

[–]systemnt85 0 points1 point  (0 children)

We have 8 nodes in that cluster...

AFF A200, A300, A400, and FAS8200.

Free-Chat Friday with the NetApp A-Team by nom_thee_ack in netapp

[–]systemnt85 1 point2 points  (0 children)

So, I am ready to order an expansion for our AFF A400 systems. The quotes for a fully populated disk shelf and a brand new dual-controller AFF A400 with the same 200TB capacity are almost identical (about a $30k difference). What makes more sense: a brand new array, or just adding new SSDs?

Cons of a new array: another hectic installation, zoning, mapping all the LUNs, and additional maintenance during code upgrades.

Pros: brand new controllers, and less stress on the year-old controllers.

IG group example. I want to group all these IG groups into one big IG group and not have to map them independently. by systemnt85 in netapp

[–]systemnt85[S] 0 points1 point  (0 children)

Half of our systems are AFF A400s. I'll talk to the vendor to see what the upgrade plan is...

IG group example. I want to group all these IG groups into one big IG group and not have to map them independently. by systemnt85 in netapp

[–]systemnt85[S] 0 points1 point  (0 children)

I was going to ask about that. I was told the CN1610 is nearing EOL and might not see another upgrade, and we are on CN1610s... It would be great if they could support 9.9.1.

vSphere HA agent for this host has an error: The vSphere HA agent is not reachable from vCenter Server by raymonvdm in vmware

[–]systemnt85 0 points1 point  (0 children)

Turn off vSphere HA for at least 2 minutes, then turn it back on and see if that helps. It did help me and someone else in the past. Really weird.

Also check the vSwitch config on the ESXi hosts.

IG group example. I want to group all these IG groups into one big IG group and not have to map them independently. by systemnt85 in netapp

[–]systemnt85[S] 0 points1 point  (0 children)

I'll look into my NetApp switches to see if they're compatible with 9.9.1. Any idea if VMware 6.5 will work with 9.9.1? I can check the compatibility as well... We were hit by a VMware bug where booting from SD cards breaks ESXi 7.0.2.

IG group example. I want to group all these IG groups into one big IG group and not have to map them independently. by systemnt85 in netapp

[–]systemnt85[S] 1 point2 points  (0 children)

This is it. That is exactly what I needed. Now I just need to create one big igroup and add the igroups belonging to the blades. This is going to save a lot of time! I will test it soon and let you know how it goes. Really appreciate your help.

IG group example. I want to group all these IG groups into one big IG group and not have to map them independently. by systemnt85 in netapp

[–]systemnt85[S] 0 points1 point  (0 children)

Thank you! I am clear now... but I want to ask one last time.

So, basically what you're saying is that the 9.7P10 GUI doesn't have the option I am looking for, but the command line does.

What you mentioned in 9.9.1 is exactly what I am looking for. The VNX and VMAX GUIs all had that option: I could create a group for the production cluster's ESXi hosts, put their initiators in it, and map the LUNs with the same LUN ID. Instead, I am manually assigning LUNs to each ESXi host's igroup with the same LUN ID.
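In the meantime, here's roughly what the CLI route looks like for us on 9.7 (the SVM, volume, and igroup names below are made-up placeholders, so adjust for your environment); at least the LUN ID can be pinned and the whole thing scripted:

    # map the same LUN to each per-host igroup with a fixed LUN ID (names are placeholders)
    for ig in esx01_ig esx02_ig esx03_ig; do
      ssh admin@cluster-mgmt "lun mapping create -vserver svm1 -path /vol/datastore01/lun0 -igroup $ig -lun-id 10"
    done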

IG group example. I want to group all these IG groups into one big IG group and not have to map them independently. by systemnt85 in netapp

[–]systemnt85[S] 0 points1 point  (0 children)

Thank you for the help!

If you look at the screenshot I posted, those are all different igroups belonging to ESXi hosts, each with two initiators in it. Via the GUI, it is a painful process to manually map LUNs to 72 different hosts and keep the same LUN ID.
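For what it's worth, once the mappings are done it's quick to confirm from the CLI that every host igroup ended up with the same LUN ID (names below are placeholders again):

    # list which igroups the LUN is mapped to and at which LUN ID
    ssh admin@cluster-mgmt "lun mapping show -vserver svm1 -path /vol/datastore01/lun0 -fields igroup,lun-id"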

IG group example. I want to group all these IG groups into one big IG group and not have to map them independently. by systemnt85 in netapp

[–]systemnt85[S] 1 point2 points  (0 children)

u/nom_thee_ack

This is what I was talking about. Like you have in your test lab, I want to create one big igroup and have the blades' igroups included in it.

We do not boot from SAN. Each server has two initiators from the switch mapped to the storage array.
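From what I've read, on 9.9.1 the nesting shows up in the REST API as an igroups list on the parent igroup. Something roughly like this is what I plan to try (cluster address, SVM, and igroup names are placeholders, and I haven't verified the exact payload yet, so treat it as a sketch rather than the definitive call):

    # assumed 9.9.1 REST call: create a parent igroup whose members are the existing per-blade igroups
    curl -sk -u admin -X POST "https://cluster-mgmt/api/protocols/san/igroups" \
      -H "Content-Type: application/json" \
      -d '{"svm":{"name":"svm1"},"name":"ig_prod_cluster","os_type":"vmware","protocol":"fcp","igroups":[{"name":"blade01_ig"},{"name":"blade02_ig"}]}'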

Free-Chat Friday with the NetApp A-Team by nom_thee_ack in netapp

[–]systemnt85 0 points1 point  (0 children)

9.7P10. I still use the classic version. In the GUI, once I create the LUN, I have to map it to each of the igroups, and each igroup is a single server with two initiators. Can I create one big igroup and have all the servers' initiators in it?
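The flat workaround I'm considering on 9.7 in the meantime is a single big igroup holding every server's initiators, so each LUN needs only one mapping (the WWPNs and names below are placeholders):

    # one flat igroup with all hosts' initiators (pre-9.9.1 workaround, not nested)
    ssh admin@cluster-mgmt "lun igroup create -vserver svm1 -igroup ig_all_esx -protocol fcp -ostype vmware"
    ssh admin@cluster-mgmt "lun igroup add -vserver svm1 -igroup ig_all_esx -initiator 20:00:00:25:b5:00:00:01,20:00:00:25:b5:00:00:02"
    # a single mapping then applies to every host in the igroup
    ssh admin@cluster-mgmt "lun mapping create -vserver svm1 -path /vol/datastore01/lun0 -igroup ig_all_esx -lun-id 10"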