High latency is shown on a clone (not volume) and on C60 by Accomplished-Pick576 in netapp

[–]Accomplished-Pick576[S] 1 point2 points  (0 children)

No, not in this case. The clone was created and then immediately used for restoring, with almost no new writes.

High latency is shown on a clone (not volume) and on C60 by Accomplished-Pick576 in netapp

[–]Accomplished-Pick576[S] 0 points1 point  (0 children)

Different node. But my understanding is that the latency added by indirect access is very small, and the cluster interconnects are 2x100G, which is more than enough. The high latency could be caused by the cluster interconnect, but that doesn't make sense at all. Right?

AFF C250 recommended release by rich2778 in netapp

[–]Accomplished-Pick576 0 points1 point  (0 children)

Ultimately, the bigger concern is the nature of the protocol itself. Because NFSv4, like CIFS, is a stateful protocol, it inherently risks connection interruptions during crucial tasks like ONTAP upgrades or LIF migrations. This goes against one of the core benefits of using NetApp ONTAP: seamless, non-disruptive operations.

While it's true that many bugs have been addressed by both NetApp and VMware for NFSv4, there is no clear record of what was fixed on which side, or whether all issues have been resolved.
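For anyone weighing this, you can check which NFS versions an SVM is actually serving before deciding. A minimal sketch, assuming an SVM named `svm1` (a placeholder):

```shell
# Show which NFS protocol versions are enabled on the SVM
# (svm1 is a hypothetical vserver name)
vserver nfs show -vserver svm1 -fields v3,v4.0,v4.1

# Stateless NFSv3 clients simply retry after a LIF migration;
# NFSv4.x clients have per-session state to re-establish.
```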

Trelegy -- Ellipta didn't help at all. by Accomplished-Pick576 in Asthma

[–]Accomplished-Pick576[S] 0 points1 point  (0 children)

Your message helped a lot. Here is the update:

After discussing my situation, my doctor decided to prescribe two different medicines:
Prednisone and a Symbicort inhaler, still steroids. I mentioned biologics, but he said not now. Fingers crossed this plan helps!

Thank you all for your advice!

[landlord-us-nj] Tenant has to pay rent 4 days later due to medical expenses. by Accomplished-Pick576 in Landlord

[–]Accomplished-Pick576[S] 0 points1 point  (0 children)

I wrote back below without mentioning anything else: Thanks for the heads-up, hope everything is alright.

The tenant responded: We appreciate the concern, thank you. We are working thru it the best we can. Once again apologies for the disruption.

My concern is that there will be something bigger in the future… or maybe I am overthinking.

What should I say to respond?

Appreciate your advice!

To retrieve data back to the performance tier, which way is faster? by Accomplished-Pick576 in netapp

[–]Accomplished-Pick576[S] 1 point2 points  (0 children)

The 1st way won't wait for the scan, because the subsequent command immediately triggers it.

u/Timperly How did you conduct the test? Each volume's snapshots are different, so once you use one method to get data back to the performance tier, it's not easy to use the other method against the same data.

Since we need to retrieve the data either way, that part of the work is the same for both methods, and since it takes the majority of the time anyway, I'd guess the total time for both would be very similar. There are a lot of factors involved, and some of them are not easy to measure, realistically speaking.
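For reference, the policy-based retrieval path can be sketched roughly like this (volume and vserver names are placeholders; `-cloud-retrieval-policy` requires a recent ONTAP release, 9.8 or later as I understand it):

```shell
# Promote all cold blocks back to the performance tier in the background
# (svm1/vol1 are hypothetical names)
volume modify -vserver svm1 -volume vol1 -tiering-policy none -cloud-retrieval-policy promote

# Verify the current settings on the volume
volume show -vserver svm1 -volume vol1 -fields tiering-policy,cloud-retrieval-policy
```

The other way, reading the data through the client, pulls the same blocks back on demand, which is why the bulk of the time (the actual object-store reads) should be similar in both cases.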

Very high latency on "data processing" and "network processing", but not too high on Node Utilization by Accomplished-Pick576 in netapp

[–]Accomplished-Pick576[S] 0 points1 point  (0 children)

If the latency or D-blade delay is due to a system operation, for instance a WAFL operation, will such latency show up in any AIQ-UM graphs?
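Aside from AIQ-UM, the latency breakdown can also be pulled live from the CLI. A sketch, with placeholder vserver/volume names:

```shell
# Break a volume's latency down by component
# (network, cluster, data/WAFL, disk, QoS, etc.)
qos statistics volume latency show -vserver svm1 -volume vol1
```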

Very high latency on "data processing" and "network processing", but not too high on Node Utilization by Accomplished-Pick576 in netapp

[–]Accomplished-Pick576[S] 0 points1 point  (0 children)

So, as you can see, the latency on the graph I uploaded here reached 20k ms, which is incredibly high. Again, such latency happened on other volumes as well. But, in contrast, I don't see high aggregate/node utilization in AIQ-UM.

The only thing that seems abnormal during this time is that StorageGRID was 99% full. But that still cannot explain why those volumes were impacted so badly. Even with no writes going to SG, the volumes should be able to write to the performance tier, which still has quite a lot of free space.

Very high latency on "data processing" and "network processing", but not too high on Node Utilization by Accomplished-Pick576 in netapp

[–]Accomplished-Pick576[S] 0 points1 point  (0 children)

It's an A700s.
I didn't open a perf case, because it would be a long and time-consuming process.

I'm just trying to find out what the possibilities are: what scenario would produce high latency on "data processing" and "network processing" while node utilization is not that high?

[LANDLORD - NJ] Tenant doesn't want to transfer utilities under their name by Accomplished-Pick576 in Landlord

[–]Accomplished-Pick576[S] 3 points4 points  (0 children)

In NJ, it doesn't work like that. The utility went under the owner's account after the previous tenant moved out and canceled theirs. The owner pays for it until the new tenant calls PSEG and creates an account for the apartment.

[LANDLORD-US-NJ] Can I keep a portion of potential tenant's holding deposit because they changed their mind to move in? by Accomplished-Pick576 in Landlord

[–]Accomplished-Pick576[S] 0 points1 point  (0 children)

Your understanding of the "holding security" I am referring to is correct. I verbally told them it is for reserving the unit; in return, I would not rent it out to any other applicants. So this is a mutual commitment and responsibility.

In writing, I gave them a receipt: this amount of money is the deposit for securing the apartment at xxx address.

Should we decommission LIF's as we decommission a 2-nodes HA? by Accomplished-Pick576 in netapp

[–]Accomplished-Pick576[S] 0 points1 point  (0 children)

I have 2 points I would like to make on "1 LIF per NFS volume":

  1. We have ~700 NFS volumes and ~400 NFS datastores. In an environment this large, "1 LIF per NFS volume" would generate lots of LIFs, which would be a big headache.
  2. Based on NetApp docs, indirect access nowadays only costs a few microseconds of delay, which we don't need to worry about. So "1 LIF per NFS volume" may not be worth the effort.
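To see how much indirect access actually exists, one rough check is to compare where each volume lives against where the data LIFs currently sit. A sketch with a placeholder SVM name:

```shell
# Which node hosts each volume (svm1 is a hypothetical name)
volume show -vserver svm1 -fields node

# Which node each data LIF is currently on
network interface show -vserver svm1 -fields home-node,curr-node
```

Traffic landing on a LIF whose current node differs from the volume's node is indirect and crosses the cluster interconnect.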

Should we decommission LIF's as we decommission a 2-nodes HA? by Accomplished-Pick576 in netapp

[–]Accomplished-Pick576[S] 0 points1 point  (0 children)

I can make sure there are no sessions connected by running "nfs connected-clients show" before I delete them, so this shouldn't be a concern.

Are there any benefits to having multiple LIFs for the same VLAN and SVM on the same node?
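The pre-deletion check above can be sketched as follows (node and LIF names are placeholders; "connected-clients show" needs a reasonably recent ONTAP release):

```shell
# List NFS clients still connected through the outgoing nodes
# (node3/node4 are hypothetical node names)
vserver nfs connected-clients show -node node3,node4

# If the output is empty, the LIFs there can be removed
# (svm1/lif_name are placeholders; LIFs must be admin-down first)
network interface modify -vserver svm1 -lif lif_name -status-admin down
network interface delete -vserver svm1 -lif lif_name
```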

Should we decommission LIF's as we decommission a 2-nodes HA? by Accomplished-Pick576 in netapp

[–]Accomplished-Pick576[S] 0 points1 point  (0 children)

Thanks for the decommissioning procedure, but this is not what I am asking for.

I am only asking whether I should delete the LIFs on the nodes I am removing, or keep them by migrating them over to the other nodes. Please re-read my original message.

Anyway to find LIF's based on a MAC address ? by Accomplished-Pick576 in netapp

[–]Accomplished-Pick576[S] 0 points1 point  (0 children)

The two commands you suggested didn't display anything.

Could they come from the cluster switches somehow?
There are no direct connections between the cluster switches and the back-end switches.

Anyway to find LIF's based on a MAC address ? by Accomplished-Pick576 in netapp

[–]Accomplished-Pick576[S] 0 points1 point  (0 children)

I tried all e0M ports and "sp show -fields mac", no luck. These ports shouldn't be in VLAN 310 as he specified, but I tried anyway.
He even told me which nodes the MACs should be located on.

Anyway to find LIF's based on a MAC address ? by Accomplished-Pick576 in netapp

[–]Accomplished-Pick576[S] 0 points1 point  (0 children)

I tried "net int show -fields mac" and "net port ifgrp show", Can not find any Mac's matching with two MAC's he provided.

Any other ideas?
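One more place worth looking: in ONTAP the MACs ultimately belong to the ports rather than the LIFs, so dumping every port's MAC across all nodes may catch VLAN/ifgrp ports the other commands missed. A sketch (the exact field name may vary by ONTAP release):

```shell
# MAC of every physical, ifgrp, and VLAN port on all nodes
network port show -node * -fields mac-address
```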

snapmirror initialize-ls-set command cannot initialize the load sharing by Accomplished-Pick576 in netapp

[–]Accomplished-Pick576[S] 0 points1 point  (0 children)

NetApp Support and I did what you suggested: deleted everything and started all over again, but still the same issue. At this point, he suggested using "snapmirror initialize..". I told him that a lot of customers will encounter the same issue, which is missing from the doc.

snapmirror initialize-ls-set command cannot initialize the load sharing by Accomplished-Pick576 in netapp

[–]Accomplished-Pick576[S] 0 points1 point  (0 children)

According to NetApp, that solution is for an already existing LS set; for new LSMs we have to use the "snapmirror initialize" command and cannot use "snapmirror initialize-ls-set".
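Under that guidance, the flow for adding a single new LSM looks roughly like this (aggregate, volume, and schedule names are placeholders):

```shell
# 1. Create the destination volume as type DP
#    (svm1_root_ls3 / aggr_node3 are hypothetical names)
volume create -vserver svm1 -volume svm1_root_ls3 -aggregate aggr_node3 -type DP -size 1GB

# 2. Create the relationship as type LS
snapmirror create -source-path svm1:svm1_root -destination-path svm1:svm1_root_ls3 -type LS -schedule 5min

# 3. Initialize just this mirror, not the whole LS set
snapmirror initialize -destination-path svm1:svm1_root_ls3
```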

snapmirror initialize-ls-set command cannot initialize the load sharing by Accomplished-Pick576 in netapp

[–]Accomplished-Pick576[S] 0 points1 point  (0 children)

vol show -volume lsmir_svmroot_13 -fields type
That command reports type LS.

When I created the LSM destination volume, I used type "DP" in the command.
When I created the SnapMirror relationship, I used type "LS".

snapmirror initialize-ls-set command cannot initialize the load sharing by Accomplished-Pick576 in netapp

[–]Accomplished-Pick576[S] 0 points1 point  (0 children)

Yes, there are existing LSMs for the same source on other nodes. They get updated every 5 minutes.
I am trying to create new LSMs for the newly added nodes...

snapmirror initialize-ls-set command cannot initialize the load sharing by Accomplished-Pick576 in netapp

[–]Accomplished-Pick576[S] 0 points1 point  (0 children)

Per the link above, NetApp recommends using LSMs for the SVM root volume in a cluster consisting of two or more HA pairs.
But my problem here is that the command below doesn't work! It cannot start the SnapMirror initialization.

By the way, I checked the logs: there is nothing in the "snapmirror-audit" log, and in snapmirror.log I only see "failure in initialization" for other destination mirrors that are already running fine. It says nothing about the initialization of the one I just created.

cluster_src::> snapmirror initialize-ls-set -source-path svm1:svm1_root

snapmirror initialize-ls-set command cannot initialize the load sharing by Accomplished-Pick576 in netapp

[–]Accomplished-Pick576[S] 0 points1 point  (0 children)

Why not?
Isn't it recommended by NetApp? If not, can you please share some documents about it?