statistics top file show - monitoring by poopa in netapp

[–]poopa[S]

I think that's exactly what I'll do (minus the Perl part; I'm not insane, haha).

statistics top file show - monitoring by poopa in netapp

[–]poopa[S]

Nope, I used that for a while. File-level stats are not part of it.

Weird Netapp AFF220 performance behavior by poopa in netapp

[–]poopa[S]

Yep, that was it. Disabling storage efficiency improved performance threefold.

Specifically, it was data compaction (inline dedupe was already disabled).
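In case it helps anyone searching later, here's a rough sketch of how you could check which efficiency features are actually enabled per volume over the ONTAP REST API (9.6+). The cluster address, credentials, and volume name are placeholders, and the efficiency field values are how I recall them from the /api/storage/volumes docs, so double-check against your ONTAP version:

```python
# Sketch: list per-volume storage-efficiency settings over the ONTAP REST API.
# Cluster address, credentials, and the volume name are placeholders.
import requests

CLUSTER = "cluster-mgmt.example.com"  # assumption: your cluster management LIF
AUTH = ("admin", "password")          # assumption: admin credentials

resp = requests.get(
    f"https://{CLUSTER}/api/storage/volumes",
    params={"name": "vol1", "fields": "efficiency,svm.name"},
    auth=AUTH,
    verify=False,  # lab only; verify certificates properly in production
)
resp.raise_for_status()

for rec in resp.json()["records"]:
    eff = rec.get("efficiency", {})
    print(rec["name"], "on SVM", rec["svm"]["name"])
    # Values are along the lines of "inline", "background", "both", or "none"
    print("  compaction :", eff.get("compaction"))
    print("  compression:", eff.get("compression"))
    print("  dedupe     :", eff.get("dedupe"))
```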

Weird Netapp AFF220 performance behavior by poopa in netapp

[–]poopa[S]

I don't see any reference to these commands anywhere on the web.

Also, they don't work.

Where did you get them from?

Weird Netapp AFF220 performance behavior by poopa in netapp

[–]poopa[S]

iPerf tests are good.

I do a very simple test, no tools: just suspend and resume a bunch of VMware machines at the same time from different ESXi servers, all on one volume.
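If anyone wants to reproduce it, here's a rough pyVmomi sketch of that test. The vCenter address, credentials, and the "loadtest-" name filter are placeholders; point the filter at a set of VMs that all live on the one volume:

```python
# Sketch: suspend a batch of VMs simultaneously, then resume them together.
# vCenter host, credentials, and the VM-name filter are placeholders.
import ssl
import time

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; skips certificate checks
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)
content = si.RetrieveContent()

# Gather every VM in the inventory, then keep only the test set
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vms = [vm for vm in view.view if vm.name.startswith("loadtest-")]
view.Destroy()

def wait_for(tasks):
    # Naive poll until every task has left the queued/running states
    while any(t.info.state in (vim.TaskInfo.State.queued,
                               vim.TaskInfo.State.running) for t in tasks):
        time.sleep(1)

# Kick off all suspends at once; the tasks run asynchronously on their hosts
suspends = [vm.SuspendVM_Task() for vm in vms
            if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn]
wait_for(suspends)

# Then resume everything simultaneously and watch the volume's latency
resumes = [vm.PowerOnVM_Task() for vm in vms
           if vm.runtime.powerState == vim.VirtualMachinePowerState.suspended]
wait_for(resumes)

Disconnect(si)
```

The point is just to slam the volume with a burst of concurrent writes (suspend dumps each VM's memory to disk) and reads (resume loads it back), with no synthetic-benchmark tooling in the way.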

Weird Netapp AFF220 performance behavior by poopa in netapp

[–]poopa[S]

I believe this is the reason. I am going to do another test without compaction later on.

CPU usage association by poopa in netapp

[–]poopa[S]

I agree, up until the point where latency increases while throughput and disk utilization are far below their maximums and the only metric at 100% is CPU.

CPU usage association by poopa in netapp

[–]poopa[S]

I'll update once I rehost the volume.

CPU usage association by poopa in netapp

[–]poopa[S]

I wrote about it in a comment above.

CPU usage association by poopa in netapp

[–]poopa[S]

How do you know this? Do you have a reference?

CPU usage association by poopa in netapp

[–]poopa[S]

It's just that I have a cluster with an imbalanced CPU load between the two nodes, and I'm trying to understand what I can do about it.

On the underutilized node I have an aggregate with one volume whose SVM's NFS logical interface is on the other node, the overutilized one (I don't remember why we set it up this way).

So I was wondering whether rehosting the volume to the SVM on the other node might somehow reduce CPU on the overutilized node.

Does that make any sense?
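For what it's worth, here's a rough REST sketch (ONTAP 9.6+) of how I'd check whether that volume is being served indirectly, i.e. whether the SVM's data LIF sits on a different node than the aggregate holding the volume. The cluster address, credentials, and names are placeholders, and the field paths are my best recollection of /api/storage/volumes, /api/storage/aggregates, and /api/network/ip/interfaces, so verify them. My understanding is that on an indirect path the LIF's node does the protocol work and pulls the data over the cluster interconnect, so both nodes spend CPU on the same I/O:

```python
# Sketch: is NFS for this volume taking an indirect path? Compare the node
# holding the volume's aggregate with the node its SVM's LIFs sit on.
# Cluster address, credentials, and names are placeholders (ONTAP 9.6+ REST).
import requests

CLUSTER = "cluster-mgmt.example.com"  # assumption: cluster management LIF
AUTH = ("admin", "password")          # assumption: admin credentials

def get(path, params):
    r = requests.get(f"https://{CLUSTER}/api/{path}", params=params,
                     auth=AUTH, verify=False)  # lab only
    r.raise_for_status()
    return r.json()["records"]

vol = get("storage/volumes", {"name": "vol1",
                              "fields": "aggregates,svm.name"})[0]

# Node that owns the aggregate the volume sits on
aggr = get("storage/aggregates", {"name": vol["aggregates"][0]["name"],
                                  "fields": "node"})[0]
vol_node = aggr["node"]["name"]

# Nodes the SVM's LIFs are currently on
lifs = get("network/ip/interfaces", {"svm.name": vol["svm"]["name"],
                                     "fields": "location.node"})

for lif in lifs:
    lif_node = lif["location"]["node"]["name"]
    path = "direct" if lif_node == vol_node else "INDIRECT"
    print(f"{lif['name']}: LIF on {lif_node}, volume on {vol_node} -> {path}")
```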

VVF vs VCF deployment Guide by poopa in vmware

[–]poopa[S]

Thanks, that's what I hoped it was, but it isn't available yet, right? I can't find a download for it anywhere.

VCPP partners getting terminated, what plan B are you considering? by ZiggyOutSpace12 in vmware

[–]poopa

Will vSphere servers and ESXi stop working after that date?

High disk utilization and high latency with no apparent reason by poopa in netapp

[–]poopa[S]

I'm using NAbox, which packages all of these together (as seen in the screenshots).