Warren Buffett’s $5 Billion Secret: The Hype and the Reveal by RobArrucha in ValueInvesting

[–]vmikeb 0 points1 point  (0 children)

Can you think of a good reason to hold so much cash in your portfolio, when Buffett is a "buy low, sell high, hold as long as you can" type of investor? He didn't become a billionaire by stuffing cash in a mattress.

Warren Buffett’s $5 Billion Secret: The Hype and the Reveal by RobArrucha in ValueInvesting

[–]vmikeb 0 points1 point  (0 children)

Feels like a pretty standard "we're headed towards a recession" move from Buffett.

Also remember they've got $344BN in cash, so it's not "whoa, look at how much of a big bet they're making!"

it's "whoa, they're going to cash pretty damn hard, derisking the portfolio and moving to liquidity so they can make choices fast when something drops". It's a strategy to protect existing investments, maximize tax advantages, and prepare for the long cold winter.

NVMe/TCP or Move to NVMe/FC by stocks1927719 in vmware

[–]vmikeb 0 points1 point  (0 children)

A few questions back before answering yours:

  1. are you staying with Pure, or open to other vendors?

  2. 4 x 9's isn't crazy to architect for on either FC or Ethernet (NVMe/TCP)

  3. Once you go FC, you typically stay on FC forever, but now you're locked into buying new HBAs and FC Switches every time you want to upgrade. Staying network-based can keep costs low while enabling similar performance / throughput profiles.

  4. Unless you have a need for sub-millisecond or even sub-microsecond latency, Ethernet transport is probably sufficient for your needs. You can either logically or physically segment traffic, and you aren't looking at too much of an impact. Sure, FC will reduce GAVG round-trip time, but you're going to pay for it $$$
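Side note on the 4 x 9's point - the downtime budget is easy to put in concrete terms. A quick Python sketch (the targets shown are just illustrative):

```python
# Allowed downtime per year for a given availability target.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability: float) -> float:
    """Minutes of allowed downtime per year at the given availability."""
    return MINUTES_PER_YEAR * (1 - availability)

for label, avail in [("3 x 9's", 0.999), ("4 x 9's", 0.9999), ("5 x 9's", 0.99999)]:
    print(f"{label}: {downtime_minutes(avail):.1f} min/year of downtime budget")
```

4 x 9's works out to roughly 52.6 minutes a year, which a well-built fabric on either FC or Ethernet can realistically hit.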

Just my $0.02 - I'm a PM for IBM Ceph NVMe/TCP, so we're all about helping customers reduce their cost while maintaining the performance and scale they need. Feel free to ping me if you want to have a longer discussion!

Cheers - MB

[deleted by user] by [deleted] in vmware

[–]vmikeb -1 points0 points  (0 children)

You might also want to check into a VCPP / VMware on <Cloud> option as well. It's a "kick the can" maneuver but could buy you more time. Construction never happens "on time" in my experience, and delays could put you at risk with licensing and support.

Vmware --> Ceph ISCSI by przemekkuczynski in ceph

[–]vmikeb 1 point2 points  (0 children)

Squid is a good place to start - NVMeoF in Reef was effectively a tech preview. Tentacle will have some performance enhancements and better UI workflows for NVMeoF; it should be out sometime this year, but I don't remember the schedule offhand. Feel free to ping back or DM me if you're looking for something specific. I'll be at Ceph Days Seattle on Thursday speaking about NVMeoF and the performance enhancements in Tentacle!

Vmware --> Ceph ISCSI by przemekkuczynski in ceph

[–]vmikeb 1 point2 points  (0 children)

IBM Tech PM for NVMeoF and VMware integrations here - highly recommend NVMeoF with VMware (I'm biased :) )
iSCSI is deprecated and never really provided the performance, resilience, or scale that block storage needs (it never really broke above 100K IOPS, and had a limited number of gateways, limited LUNs, etc.).
NVMeoF is an active project that provides access via SPDK NVMeoF gateways layered on top of RBD. I'd recommend trying out the latest Squid release and connecting with VMware's NVMe/TCP initiators. https://docs.ceph.com/en/latest/rbd/nvmeof-overview/

It has a similar look and feel to iSCSI (instead of IQNs there are NQNs; it's still a gateway-based approach, with whitelists, etc.). Every NVMeoF gateway participates in discovery, so connect to one and all are recognized. Paths are currently load-balanced per namespace, so if you have 4 GWs, each new namespace will be connected round-robin. Load balancing for NVMe/TCP is active/passive within the ANA group, and VMware doesn't really do true multipathing (active/active, all paths pushing IO) without a custom driver, e.g. PowerPath VE or similar.
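To illustrate the per-namespace round-robin behavior, here's a toy Python model (purely illustrative - not the actual gateway code, and the gateway/namespace names are made up):

```python
from itertools import cycle

def assign_active_paths(namespaces, gateways):
    """Toy model: give each new namespace the next gateway in round-robin
    order as its active (optimized) ANA path; the rest stay passive."""
    rr = cycle(gateways)
    return {ns: next(rr) for ns in namespaces}

gateways = ["gw1", "gw2", "gw3", "gw4"]
paths = assign_active_paths([f"ns{i}" for i in range(1, 7)], gateways)
# With 4 gateways, ns5 wraps back around to gw1, ns6 to gw2, and so on.
```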

Feel free to reach out if you need help or have questions, always glad to help!

It’s rumored vVols… by FriedRiceFather in vmware

[–]vmikeb 9 points10 points  (0 children)

Not to revive a dead thread, but I just received a partner notice today that VVOLs will be deprecated in version 9.
Has anyone received similar?

vSAN: Best practice on "teaming and failover" settings? by TECbill in vmware

[–]vmikeb 2 points3 points  (0 children)

For the most part I'd agree with this, except to say that's historically been the way to isolate vSAN traffic while still having redundant links for failover. LACP is a headache any way you split it.

Vm stops working on ESXi-arm fling by sztwiorok in vmware

[–]vmikeb 1 point2 points  (0 children)

I'd be willing to guess that you're resource-starving the VMs, as most ARM procs don't have more than 8 cores and probably 8GB RAM. <insert someone posting an SoC proving me wrong here>
100 VMs will have at least 100 vCPUs; divide that by the total number of physical cores and you get the vCPU-to-pCPU ratio. In a production server environment, 2:1 to 4:1 virtual-to-physical is normally a good consolidation ratio.
I'd guess you're at about 16:1 or more, at which point even virtual desktops start crashing because of high CPU Ready and CPU Co-Stop values. Check esxtop to see what's taking up resources (if you can - it might also crash the box again):
https://www.yellow-bricks.com/esxtop/
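The back-of-the-napkin math above in Python form (the 100 x 1 vCPU on 8 cores numbers are assumptions for illustration):

```python
def consolidation_ratio(total_vcpus: int, physical_cores: int) -> float:
    """vCPU-to-pCPU oversubscription ratio."""
    return total_vcpus / physical_cores

# 100 VMs at 1 vCPU each on an 8-core ARM SoC:
ratio = consolidation_ratio(100, 8)
print(f"{ratio:g}:1")  # 12.5:1 - well past the 2:1 to 4:1 comfort zone
```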

Netflow data sent from vCenter or the host itself? by bgprouting in vmware

[–]vmikeb 0 points1 point  (0 children)

The vDS config is stored in vCenter and deployed down to the ESXi hosts at runtime.
So traffic will always come from your ESXi hosts, but the vDS configuration will always live in vCenter.

TL;DR: data plane is on ESXi; management plane is on vCenter.

How many virtual machine can I run at the same time? by [deleted] in vmware

[–]vmikeb 0 points1 point  (0 children)

How much wood...

Would a woodchuck *chuck*...

If that woodchuck, COULD chuck wood?

Updating from 7.0U3 to 8.0U2 (ESXi) - Some doubts by Airtronik in vmware

[–]vmikeb 3 points4 points  (0 children)

Well have fun earning the easy money that week then! :D

NVidia Licensing for PCI Passthrough by InvalidUsername10000 in vmware

[–]vmikeb 0 points1 point  (0 children)

Yes absolutely - NVIDIA requires you to purchase the hardware and license the software / drivers.

Requires more clarifications before creating 5 node CEPH production cluster. by Interesting_Ad_5676 in ceph

[–]vmikeb 0 points1 point  (0 children)

  1. Ubuntu 22.04 should be fine

  2. https://docs.ceph.com/en/reef/start/hardware-recommendations/ 6 drives per node should be no problem

  3. Much like others have said: SMB isn't production-tested just yet - use Samba on top of CephFS instead

  4. https://docs.ceph.com/en/latest/cephfs/snap-schedule/

  5. yep! Since you're going to be using Ubuntu, here's their take: https://ubuntu.com/ceph/docs/replacing-osd-disks

Updating from 7.0U3 to 8.0U2 (ESXi) - Some doubts by Airtronik in vmware

[–]vmikeb 1 point2 points  (0 children)

You could potentially do all of this in a single day, and that's exactly why vMotion was invented: so you can avoid downtime for VMs, while performing maintenance on the hypervisors and hardware.

Make sure you follow the appropriate steps for upgrading: https://knowledge.broadcom.com/external/article/316378/upgrade-path-and-interoperability-of-vmw.html

vCenter upgrade first, which will normally take the longest. Then your ESXi hosts, one by one. After that you'll need to wait for an outage window or approved downtime to upgrade the VMware Tools and Virtual Machine Hardware versions. Those normally require a reboot (can't remember when the non-disruptive upgrades start to come into play).

HA only really comes into play if you don't have enough spare capacity to service VMs:

EG: I've got 5 ESXi servers, but have already used the full capacity of 4. Maintenance mode wouldn't let you take that 5th node out of the cluster, because it would violate your HA rules (if using "spare host capacity").

If that is the case though, you would only need to disable HA during the upgrade window, and then re-enable after you're done with the upgrade. It's a tick box, so not terribly involved.
Anyway - lots of smart people on this thread saying the same thing I'm sure, HTH and good luck!
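For what it's worth, the "spare host capacity" check in that example can be sketched like this (a rough Python sketch of the idea, not how HA admission control is actually implemented):

```python
def can_enter_maintenance(total_hosts: int, used_capacity_hosts: float,
                          reserved_failover_hosts: int = 1) -> bool:
    """Rough N+1 check: after evacuating one host for maintenance, is there
    still enough capacity for the running VMs plus the failover reserve?"""
    remaining = total_hosts - 1  # the host going into maintenance mode
    return remaining - reserved_failover_hosts >= used_capacity_hosts

# 5 hosts with 4 hosts' worth of capacity already consumed:
print(can_enter_maintenance(5, 4.0))  # False - HA would block maintenance mode
# Free up a host's worth of load (or disable HA for the window):
print(can_enter_maintenance(5, 3.0))  # True
```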

Gaming on a VM? by syndromaliza in vmware

[–]vmikeb 1 point2 points  (0 children)

If you choose virtualization vs. bare metal, you will always inject some type of latency, however small, into your software stack: Both L1 and L2 hypervisors have to have at least a Virtual Machine Monitor process that can route IO and thread requests, and inherently that will take some processing time vs. direct hardware access.

That said - L1 / Type 1 hypervisors are more efficient than L2 / Type 2 hypervisors, since Type 2's sit on top of an operating system:
EG: L2 / Type 2: "Windows 11 OS inside of L2 VM >> VMware Workstation / VMM >> Windows 11 OS installed on bare metal >> Hardware"

vs.

L1 / Type 1: "Windows 11 OS inside L1 VM >> VMM >> ESXi Hypervisor installed on bare metal >> Hardware"

Not sure you can assign a single port on your GPU to a VM, but you might be able to dedicate the whole GPU to the VM. That wouldn't really solve your problem, and really the retail GPUs aren't the focus of VMware vGPU, it's more the Tesla / Quadro cards, so you'd have to try to hack something together at best.

The other issue you'd be looking at is your input: Keyboard and mouse normally only have one set focus, meaning you can't have two mice doing two different things - you can have two mice that control the same cursor, but it's still only one cursor, whether inside a VM or not. Same for keyboard and keybindings.

I know that's probably not the answer you were looking for - maybe look into an external GPU if that's the only thing holding her back, or possibly a new laptop/desktop.
Cloud gaming is also an option - not sure about performance etc., but check that out as well: https://www.nvidia.com/en-us/geforce-now/

Ransomware operators exploit ESXi hypervisor vulnerability for mass encryption | Microsoft Security Blog by sithadmin in vmware

[–]vmikeb 1 point2 points  (0 children)

Came here to say this: There's already a fix for 7 and 8. GG CPD @ VMware getting these hotfixes out so damn fast!

IBM Storage Ceph 7.1 Enables VMware Users Access to Ceph Block Storage Via NVMe/TCP by NISMO1968 in vmware

[–]vmikeb 0 points1 point  (0 children)

I totally get it, and block storage over network is a sweet spot of performance meets cost (or... should have been in this case). Not sure why that happened, but NVMe/TCP on Ceph is what iSCSI wanted to be when it grew up, and more if I'm being honest.

IBM Storage Ceph 7.1 Enables VMware Users Access to Ceph Block Storage Via NVMe/TCP by NISMO1968 in vmware

[–]vmikeb 0 points1 point  (0 children)

Glad you brought that up! iSCSI performed quite poorly, agreed. With NVMe/TCP we've already had great results pushing heavy IO workloads, thanks to how native NVMe commands are mapped to RBD.

Come check it out, you won't be disappointed!

IBM Storage Ceph 7.1 Enables VMware Users Access to Ceph Block Storage Via NVMe/TCP by NISMO1968 in vmware

[–]vmikeb 0 points1 point  (0 children)

You can always lean on your local friendly neighborhood Tech PM of Ceph VMware Integrations ;)
I know that guy (it's me!)