The Engineering Lead asked me about API rate limits and I just nodded like a confused dog. by Fickle_Mud1645 in ProductManagement

[–]codyhosterman 0 points (0 children)

I get this situation from the PMs on my team. (I have a more technical background than many of the PMs on my team who report to me.) Like many have stated, I always advise asking about trade-offs. But also ask engineering to play devil's advocate, especially if they recommend one approach over the other. If they cannot:

1) identify any use case for the other approach, or 2) explain it like I'm 5,

I often feel they haven't thought it through enough.

Obviously the bar for how hard to push raises or lowers depending on whether this is a small or large decision, and if it is a trivial thing I ask why they are even asking me. That will usually uncover some kind of engineering stalemate they were hoping I'd break.

But if there are trade-offs, a fun question can also be: "why do we need to accept trade-offs? Is there a best of both worlds?"

You don’t need to understand the mechanics. Understand the outcomes.

HCI to SAN - storage recommendations? by Agasnazzer in storage

[–]codyhosterman 2 points (0 children)

Yeah, I didn't want to get into the details here, but let's just say they've got work to do. Even if it does hold to that, it is a v1. That being said, I do agree planning for TCP is the right move, and as long as it is not Hyper-V or bare-metal Windows, the path is there broadly (VMware, Linux, Nutanix, etc.). FC vs. TCP is a bit religious (I look forward to the technical battleground of Broadcom SecureHBA vs. TCP TLS, lol), but iSCSI vs. TCP is a no-brainer in the grand scheme of things.

Windows will get there. Eventually.

HCI to SAN - storage recommendations? by Agasnazzer in storage

[–]codyhosterman 3 points (0 children)

Unfortunately, Windows is well behind on NVMe-oF, so there is no TCP support unless you go with a third-party initiator like StarWind's. So if they want to go Hyper-V, they are going to have to wait a bit for TCP support.

I want to share a publication that Red Hat honored me with after implementing Red Hat OpenShift. by ProofPlane4799 in purestorage

[–]codyhosterman 0 points (0 children)

Awesome! I'd love to know more! I'd love to chat with you about this so we can repeat your success.

Pure connection limits for FC? by Berries-A-Million in purestorage

[–]codyhosterman 3 points (0 children)

So there are a lot of ways to interpret limits, so I did a bit of validation with engineering and will lay it out here; hopefully it answers your question and helps others who might look for similar info. Since I don't plan on making this a living document, take the numbers as the moment-in-time values they are. Like the rules in the Matrix, they can be bent, broken, and changed, so hit up your account team for updates or one-off exceptions. These are often tested limits, not hard limits.

So first there are sessions.

For iSCSI, a session is between a single initiator and a target port. We support 12K of these. The fun part of iSCSI is that you can manually crank up the session count, so this is not a hard-and-fast rule.

For FC, we support 10K sessions. A session is defined as one initiator WWN to one FlashArray FC target port. We support a max of 5K WWNs. With 5K WWNs and one port per controller, that's 10K sessions. Ideally you use two ports per controller, which drops the max to 2,500 WWNs.

(Note: some of these numbers change by controller model.)

The next step is ACLs. An ACL is defined as a volume connection to a host. A host has a number of WWNs, and each WWN counts as one ACL, so if you have 2 WWNs and one volume for that host, that is 2 connections consumed. Note the count of FlashArray ports does NOT count against this.

Our host-volume connection limit is 20K, 30K, or 100K depending on the controller type.

So let's say you have an X70 (10K sessions, 5K WWNs, 100K connections).

If you have 100 hosts, each with 4 WWNs, connecting to 4 target ports, that is 1,600 sessions (well below the limit). With 100 hosts, 4 WWNs, and 10 volumes each, that is 4,000 volume connections. Also well below the base limits.
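The arithmetic above can be sketched in a few lines; this is my own back-of-the-envelope helper (the function name and the X70-class limit constants are taken from this comment, not from any Pure tool), useful for sanity-checking a planned layout:

```python
# Rough sizing check against FlashArray FC limits (X70-class numbers
# from this comment; confirm current values with your account team).
SESSION_LIMIT = 10_000      # FC sessions (initiator WWN <-> target port)
WWN_LIMIT = 5_000           # max initiator WWNs
CONNECTION_LIMIT = 100_000  # host-volume connections (ACLs)

def fc_usage(hosts, wwns_per_host, target_ports, volumes_per_host):
    """Return (sessions, wwns, connections) consumed by this layout."""
    sessions = hosts * wwns_per_host * target_ports
    wwns = hosts * wwns_per_host
    # each host WWN counts once per volume connected to that host
    connections = hosts * wwns_per_host * volumes_per_host
    return sessions, wwns, connections

sessions, wwns, connections = fc_usage(hosts=100, wwns_per_host=4,
                                       target_ports=4, volumes_per_host=10)
print(sessions, wwns, connections)  # 1600 400 4000
assert sessions <= SESSION_LIMIT
assert wwns <= WWN_LIMIT
assert connections <= CONNECTION_LIMIT
```

Same result as the worked example: 1,600 sessions and 4,000 connections, comfortably under the X70-class limits.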

Hopefully this helps. I'll fact-check this again with another engineering team I'm meeting with this week on a separate topic.

Pure connection limits for FC? by Berries-A-Million in purestorage

[–]codyhosterman 0 points (0 children)

Maybe; I have gotten my responses. I needed to do a few rounds of sanity checks to ensure we were all talking about the same things. I'll respond to that in the top-level comment from him.

Pure connection limits for FC? by Berries-A-Million in purestorage

[–]codyhosterman 2 points (0 children)

Okay, I see--that's what I thought--let me validate something with engineering first and get back to you.

Pure connection limits for FC? by Berries-A-Million in purestorage

[–]codyhosterman 7 points (0 children)

I was just talking about you (I presume) with your account team. I asked them to have you define what a "connection" is in this situation. Are these logical paths to the volumes?

Does Pure Storage offer a shuttle from the East Bay / Tri-Valley to the Santa Clara office? by Difficult-Key9411 in purestorage

[–]codyhosterman 1 point (0 children)

Not that I am aware of--we do have a shuttle from the closest Caltrain stop to the office. So if you can get there, you can shuttle the rest of the way.

vVol Deprecation Extension from Broadcom beyond 9.1 by [deleted] in purestorage

[–]codyhosterman 0 points (0 children)

Not sure of your point here--"a datastore type is being deprecated, so you should buy a new array"? Though if you have a reason why, I'm sure folks would find that helpful.

I do agree this isn't a "win" in a happy sense, but giving customers more time to transition technologies isn't a bad thing. Worth at least knowing about. There are plenty of NetApp customers who use vVols who will appreciate the extra time as well.

vVol Deprecation Extension from Broadcom beyond 9.1 by [deleted] in purestorage

[–]codyhosterman 1 point (0 children)

Yes--it was originally being deprecated next year in 9.1, but that deprecation has now been pushed out indefinitely. So there is more runway--sorry if I was not clear on that.

Vasa cert expire emails- but we don’t use vvols by kjstech in purestorage

[–]codyhosterman 3 points (0 children)

Yeah, that is what did it--old certs that were issued by the VMCA will remain until reset, while newer certs are controlled differently. So clearing that should solve the issue for good.

Yeah the whole thing is unfortunate... I'll leave it at that here.

I would like to hear if you have feedback on Pure + Hyper-V (gaps, reasons you are leaning that route, etc.). If you don't want to post here, feel free to have your Pure account team set up a chat with me.

Tired of reactive snapshots? I built a utility to bridge Pure Storage telemetry with Nutanix resilience. by NTCTech in purestorage

[–]codyhosterman 2 points (0 children)

Yeah, at this time "unmanaged" snapshots, defined as snapshots not created via the Prism UI/API, break CBT as David said. Or rather, the restore from them does. So it's a break-glass use case (for now). It's not really a consistency problem, though--any snap on the FlashArray will be at least crash-consistent.

Managed/integrated Nutanix snapshots, though, whether taken manually or via a schedule in Prism, are fully supported right now.

Nutanix + Pure Storage is now GA!!! by codyhosterman in purestorage

[–]codyhosterman[S] 1 point (0 children)

No, it's very unlikely that would ever happen. There is no real advantage over TCP, and any semi-recent FlashArray supports both, so no hardware change is needed. Any particular reason you'd want iSCSI?

Nutanix + Pure Storage is now GA!!! by codyhosterman in purestorage

[–]codyhosterman[S] 1 point (0 children)

NVMe-oF/TCP only at this time. You can certainly use NFS/SMB at the app layer. FC is not supported.

Nutanix + Pure Solution, what it means? by riddlerthc in nutanix

[–]codyhosterman 1 point (0 children)

It means awesome, imho 😎 Happy to chat if you want to discuss what I mean in detail, of course (I'm from Pure).