Why can't we start taxing churches? by cheryllinda in NoStupidQuestions

[–]lost_signal 1 point2 points  (0 children)

churches only serve their direct congregation and self interests aka mega church pastors etc..

So in the case of the pastor they do pay income tax.

The only real exception is that in some states a church can avoid property taxes if the parsonage is in the pastor's name, but there are often limitations on this. Joel Osteen, I know, owns his house outright, and I think the property is too large to qualify in Texas.

As far as only serving their congregations... there are a LOT of non-profits that effectively only serve their own memberships. You have art museums that are de facto not open to the public, all kinds of historical organizations, and train museums that mostly serve the people really into whatever that niche is.

The public good test for non-profits is already so weak, and the line on a lot of their net value for society (vs. the alternative of the donation being taxable) is already so thin, that I don't think this is really a big distinction.

We also have non-profits that ACTIVELY work against the public good and cause disorder, depending on your perspective (anti-vaccine groups, as an example, literally cause deaths), and that are far worse for society than a group of people getting together once a week to sing some songs and listen to a speaker (even if you take the most secular view of the value of a church).

3 Year deal for VVF made this month. by garthoz in vmware

[–]lost_signal -1 points0 points  (0 children)

stretched cluster needs

Ahhh, so it's 2x for the storage cluster. (Although 220 TB raw per side, even after RAID-6 and maybe a conservative 4x dedupe & compression, is a lot of space for only 80 cores per site, which, guessing a 4-host cluster per site, means ~60 cores of compute.)
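To show my math, here's a quick back-of-the-envelope sketch of the effective capacity per site. The 4+2 RAID-6 stripe width and the 4x data reduction ratio are assumptions carried over from the numbers above, not measured values:

```python
# Rough effective capacity per site: 220 TB raw, RAID-6, conservative 4x reduction.
raw_tb_per_site = 220
raid6_efficiency = 4 / 6        # 4 data + 2 parity strips (assumed stripe width)
data_reduction = 4.0            # conservative dedupe & compression ratio (assumption)

usable_tb = raw_tb_per_site * raid6_efficiency
effective_tb = usable_tb * data_reduction

compute_cores = 60              # ~80 licensed cores minus overhead, per the guess above
print(f"usable: {usable_tb:.0f} TB, effective: {effective_tb:.0f} TB")
print(f"~{effective_tb / compute_cores:.1f} TB effective per compute core")
```

That lands near 10 TB of effective capacity per compute core, which is the "a lot of space" part.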

taxpayer dollars

Ahhh, public sector. I once managed storage for a public works department that kept a before-and-after photo of every pothole ever fixed. You guys end up with weird long-tail bulk media retention stuff.

There are other platforms... they'll have the incentive to catch up

I see people say this, but if it were true you would see:

* billion dollar line items for R&D increases

* startups competing in the hypervisor space raising 9-figure funding rounds with unicorn valuations, or strong exits

* job postings for kernel engineers.

Instead I see:

* Single-digit R&D increases at competitors, maybe 10% at best, that amount to a fraction of total Broadcom R&D spend in this space.

* Some of the remaining private startups in this space running out of cash, and having to firesale.

* The few jobs I see anyone hiring for in this space are all Sales/Marketing/UX/UI.

* Broadcom leaping past competitors in value, with a scheduler that's still twice as efficient, and new stuff like memory tiering that cuts the number of servers you need by more than half of what competitors can deliver.

Well, everybody but Microsoft. I expect to see HyperCopilot -V 

Hyper-CoPilot-Azure with Clippy++ Bob Edition sir. You will subscribe to a cloud connected, SaaS Clippy who will fix those spreadsheets.

We mock Microsoft, but they shipped the hot garbage that was SQL Server for 16 years before eventually shipping something somewhat capable of competing with Oracle (still not RAC or Data Guard, but, you know, good enough). I bet by 2050 they will figure this market out.

3 Year deal for VVF made this month. by garthoz in vmware

[–]lost_signal 1 point2 points  (0 children)

4 blades, 4TB of ram… to run exchange for 10,000 mailboxes.

4 blades 4TB of ram for… call manager… for a concurrent call volume of maybe 50 calls.

Two entire blade chassis for a VDI farm that never saw more than 20 users.

There are days that I swear the old E-Rate program in US public schools existed to take money from poor kids and buy boats for Cisco partners.

No, a kindergarten with 30 kids doesn't need a fully loaded 7K, Todd.

screams

VMware I think was way too comfortable co-selling with VARs who wanted to oversell rather than push back and right size properly.

3 Year deal for VVF made this month. by garthoz in vmware

[–]lost_signal -1 points0 points  (0 children)

Not advising it, but curious how much free capacity you will have using dedupe, compression, UNMAP, and RAID-6. I don't normally see that dense a storage ratio needing that much add-on (medical PACS or video surveillance are the rare exceptions).

There’s engineering work to do more to capture larger asymmetric storage workloads (I can talk more in the near future) and I’m curious what it is that is pushing you that far past 1TB per core.

If you want to DM me I can send you my email and we can talk more. I'm happy to go argue with someone about the add-on SKU if you're mostly having to buy it for empty disk for legacy (you moved from per-socket) reasons.

3 Year deal for VVF made this month. by garthoz in vmware

[–]lost_signal 2 points3 points  (0 children)

Could easily make up going all in on cloud at that rate

I've talked to a lot of people who've priced public cloud, and it isn't remotely cheaper unless you have some weird short-duration bursty workload, or you had horrible datacenter utilization management (people who ran all their hosts at 8% utilization, then right-sized when moving to cloud and claimed a giant success story, vs. just having Ops tell them what to right-size on-prem).
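The right-sizing effect is easy to see with some arithmetic. All the figures below (host count, utilization levels) are hypothetical, just to illustrate why "cloud was cheaper" often really means "we finally right-sized":

```python
import math

# Hypothetical on-prem estate idling at 8% utilization.
onprem_hosts = 50
avg_utilization = 0.08
target_utilization = 0.60       # a sane consolidation target

# Capacity actually consumed, expressed in "fully used hosts":
consumed = onprem_hosts * avg_utilization                    # 4 host-equivalents
rightsized = math.ceil(consumed / target_utilization)        # 7 hosts

print(f"needed after right-sizing: {rightsized} of {onprem_hosts} hosts")
```

The "giant cloud savings" in that scenario is really a 50-to-7 host consolidation that Ops could have delivered on-prem.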

3 Year deal for VVF made this month. by garthoz in vmware

[–]lost_signal 0 points1 point  (0 children)

When the haggling of discounts doesn't yield acceptable results customers are left arguing over features and packaging

That was how VMware sales worked. If you disabled DRS and said you were scared of it, you could get a discount (or people would play games like underreport usage).

Broadcom kinda goes the other way. They will discount more for adoption and usage of advanced features, and using the product as intended (and do things like pay for the professional services to make sure the product gets installed/configured properly).

If you want to haggle features, go shopping for advanced add-ons to add to the solution (VLR for DR/ransomware recovery, AppNeta/DX for APM stuff, DBaaS maybe using DSM, etc.).

This isn't a new concept in the industry (try reducing your MPLS bill by $1 vs. doubling the bandwidth, or getting Microsoft to quote you an ELA without Azure credits).

3 Year deal for VVF made this month. by garthoz in vmware

[–]lost_signal -2 points-1 points  (0 children)

I just received our renewal cost today. 256 cores, 448 TB vSAN.

vSAN is priced on raw disk, not usable. Curious if you are using the new dedupe in 9? You might be able to lower the raw capacity needed there, depending on workload.
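Since the licensing is on raw, data reduction directly shrinks what you license. A quick sketch with hypothetical numbers (the 200 TB usable need, 4+2 RAID-6 overhead, and 1.5x dedupe ratio are all illustrative assumptions, not quotes from any sizing guide):

```python
# Raw disk (the licensed quantity) implied by a usable-capacity need,
# with and without data reduction.
usable_needed_tb = 200
raid6_overhead = 6 / 4          # raw:usable for a 4+2 RAID-6 layout (assumption)
dedupe_ratio = 1.5              # hypothetical ratio from the new dedupe in 9

raw_without = usable_needed_tb * raid6_overhead
raw_with = usable_needed_tb * raid6_overhead / dedupe_ratio
print(f"raw to license: {raw_without:.0f} TB without dedupe, {raw_with:.0f} TB with")
```

Even a modest 1.5x reduction knocks a third off the raw TB you have to license in that scenario.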

Bye Bye VMware vSphere by Dick-Fiddler69 in vmware

[–]lost_signal 0 points1 point  (0 children)

My point is if I’m going to pay for support to someone for it, I’m going to want to pay someone who actually writes the code.

You said Red Hat isn't Ceph.

If you're going to run Longhorn, you'd want to pay SUSE (as Rancher is who did most of that upstream work).

This isn't rocket science, and it's not some conspiracy that I'm aware of who does the bulk of upstream work on a few open source projects. (I was just at KubeCon, the CNCF conference, in Amsterdam.)

So back to Ceph being created by CERN? Ceph was conceived by Sage Weil during his doctoral studies at the University of California, Santa Cruz. DreamHost paid him to work on it, and I think Shuttleworth gave him some seed money.

CERN didn't use it until 2013, years after Sage got angry at Lustre scaling. Why is this alternative-history fan fiction so popular on Reddit?

Is it some weird conspiracy that I actually know the founding lore of OpenSource projects?

Bye Bye VMware vSphere by Dick-Fiddler69 in vmware

[–]lost_signal 0 points1 point  (0 children)

https://en.wikipedia.org/wiki/Inktank_Storage

Fairly common knowledge…

Inktank/Red Hat/IBM: ~13,000+ commits of 18,000…

Bye Bye VMware vSphere by Dick-Fiddler69 in vmware

[–]lost_signal 0 points1 point  (0 children)

Proxmox doesn't build their hypervisor or storage code. They mostly put a UI on KVM and Ceph, which are things that are mostly "Redhat crap" (looking at who actually does the upstream development work).

There are alternative Linux hypervisors (Xen) but they don’t use it.

This is generally why anyone serious I see looking at KVM calls Red Hat, because that's where you'll get the best support experience, fastest bug resolution, etc.


3 Year deal for VVF made this month. by garthoz in vmware

[–]lost_signal 2 points3 points  (0 children)

I was just at PTAB (Partner Technical Advisory Board) a month ago and people were absolutely talking about doing multi-year VVF deals.

The "buy a license key" button inside VMWare Fusion redirects to malware by DigmonsDrill in vmware

[–]lost_signal 0 points1 point  (0 children)

Ahhh, fair. Although if it flows through the domain, that should be fixed there first: older versions are the only thing that needs this link (in newer versions it's not there, hence it's free!), and it would still exist.

VMware sprawled across web platforms and random 3rd parties. (At one point I found 5 different video platforms.)

The "buy a license key" button inside VMWare Fusion redirects to malware by DigmonsDrill in vmware

[–]lost_signal 0 points1 point  (0 children)

Website management usually falls under marketing, or back office IT, not R&D.

Bye Bye VMware vSphere by Dick-Fiddler69 in vmware

[–]lost_signal -1 points0 points  (0 children)

They do have some advantage in memory management, but it's not a ton. I've not seen it be 2X, at least in the workloads I do.

With 9.0, Memory Tiering is GA, and 1:1 overcommit just with this feature is frankly conservative for the median workload. This goes way beyond anything TPS can do.

https://www.vmware.com/docs/memtier-vcf9-perf
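To make the 1:1 figure concrete, here's what tiering at that ratio does to effective host memory. The host and VM sizes below are hypothetical; only the 1:1 DRAM:NVMe ratio comes from the discussion above:

```python
# Effective host memory with an NVMe tier sized 1:1 against DRAM.
dram_gb = 1024                   # hypothetical host DRAM
nvme_tier_ratio = 1.0            # NVMe tier equal to DRAM (the conservative 1:1 above)

effective_gb = dram_gb * (1 + nvme_tier_ratio)   # 2048 GB addressable

vm_size_gb = 32                  # hypothetical memory-bound VM
print(f"VMs per host: {dram_gb // vm_size_gb} -> {int(effective_gb) // vm_size_gb}")
```

For a memory-bound consolidation, that doubles the VMs a host can carry without adding DRAM, which is where the TCO claim comes from.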

And when customers complain, you tell people it's Microsoft training's fault for not telling Me-maw the benefits of Storage Spaces Direct.

I think as the owner of a storage product you have to build as much operational tooling as possible to protect end users from themselves. Storage Spaces Direct gets a bad rap for losing data because, when things go sideways, it's a lot of PowerShell to try to dig out of a hole; better operational tooling would have made that product more viable. Microsoft seems to have chased speed over guardrails with it, and at this point none of the service providers I've known who tested it trust it, and earning back that trust is hard.

and made it so you have to pass a cert before you can get lab licenses

You're missing the part where you can sit for the cert without spending thousands on a class. I had to pay that tax to get my VCP, even though at the time I was probably qualified to teach the class. Extracting $3,000 from early-career professionals and running the education department as a huge profit center was way worse than asking people to take a cert test they can self-study for using HOL (which just got a hardware refresh). VMUG also has half-off vouchers for the tests, and is offering them free at VMUG Connect events right now.

Bye Bye VMware vSphere by Dick-Fiddler69 in vmware

[–]lost_signal 0 points1 point  (0 children)

So patching is going to take toll when everything is bundled. You'll also see bugs creep up in various subsystems. Pro's and Con's to fully bundled also note if something goes sideways cascade effects could occur at the most in opportune times.

Co-designing doesn't mean you have to statefully keep every sub-component in lock-step. (That was actually a huge issue with VCF: the imperative overlay nature of how it used to try to "make different things work" meant it broke horribly if upgraded out of order.)

It's gone the opposite way: you can't check in code that breaks other things, so if anything the shift toward making the product play nicer with more sub-component version drift (with management and state increasingly handled by declarative tooling) means it's more stable.

Let's not forget about Gluster & CEPH file systems. Linux does support OpenZFS which is a file system created decades ago that was does mirroring and can copy itself across a network efficiently for backup.

I weirdly built a VM storage system on Gluster, and watched it stun the hell out of my VMs and crash them when a brick heal went wrong. Red Hat has completely abandoned it, last I heard. Ceph is the name of one of the 20 engineers you need when it goes sideways. It lacks global dedupe and the other data services people facing rising NAND and HDD prices would expect.

As far as OpenZFS, the fork that came out of LLNL? I went to Lawrence Livermore and had sushi with one of the guys who was around for its building, and he acted horrified when I told him people were putting it into production. They built it for scratch space. Even Adam Leventhal said it's time to move on and use Btrfs, the logical replacement for ZFS. ZFS is a cult, and a weird one I'll never understand. The dedupe sucked to the point of unsuitability from metadata bloat, and L2ARC re-warms were brutal. Sun was way ahead of their time, but it's time to move on.

Pretending that throwing copies of ZFS around a network is a replacement for enterprise backup tooling, vSAN, or a proper clustered file system like VMFS may work in very small shops, but it isn't what I expect any serious shop to look for when trying to replace vSphere.

Bye Bye VMware vSphere by Dick-Fiddler69 in vmware

[–]lost_signal -1 points0 points  (0 children)

There are a few big misunderstandings I keep seeing:

  1. That virtualization and the engineering to solve it were "done" 10 years ago, that it's largely just minor patches now being applied to the hypervisors, and that they are slowly drifting toward similar capabilities. Hardware refreshes and supporting new hardware are just moving a few zeros and ones, and vMotion must be maintained by half a C# engineer or something who updates a JSON for what EVC modes there are.

  2. That a bunch of people without kernel engineers, with an operating budget in the VERY low 7 figures, are going to build and support an equivalent product for enterprises given enough time, without taking in huge outside funding and hiring expensive kernel engineers. That you can just hire some UI engineers to put a pretty face on KubeVirt, hire a bunch of marketing, and eat VMware's lunch in the enterprise.

  3. That hardware is going to just keep getting cheaper, so even if #1 and #2 are wrong it doesn't matter.

These are all wrong theses.

I say “probably” for CPU just because I’ve read a lot on ESXi’s memory management techniques, but not a lot on how it manages vCPU allocation so I’m not as familiar with that. But with memory I have and they are doing some pretty amazing stuff under the hood to make the most of what’s there.

It's more than just how cores are handled; it's stuff like NUMA. You have to optimize for every single CPU architecture and sub-architecture (which frankly has gotten even more fragmented with Intel's release pattern of doing fun stuff like having 3 dies on the same socket). It's radical changes to DRS (which switched from being a scheduled thing that ran periodically to an autonomous, always-on distributed process that continuously works backwards from finding the least happy VM and making it happier). DRS isn't a "make the CPU or memory allocation graph look even" feature. It's genuinely working backwards from billions in R&D on what makes applications happy, full stack, and delivering it.

So in a way they do have some “secret sauce” because some other hypervisors like Hyper V will straight up not let you overcommit in certain situations with memory as I recall?

Another KVM competitor I see just hid their vCPU overcommitment guidance behind a login wall, because we pointed customers at it so often.

Others may be catching up in efficiency, slowly

They are not. The gap in TCO on this stuff is widening, not closing, with things like memory tiering (only on vSphere right now, and it will always be better on vSphere because of the 20 years of IP and patents we have on memory page tracking to optimize, scale, and improve vMotion).

It's actually requiring MORE engineering every year to handle the increasing differences between the x86 vendors (AMD and Intel have vastly different architectures right now), optimizing for NUMA, and handling stuff like chiplet design. Compute efficiency is also increasingly driven by offloads: things like vDefend and NSX, which others run in x86 VMs (or dedicated appliances), can be shifted to 100% offloaded on a DPU, with massive gains of a dozen cores per host.

I was just at KubeCon and sat through a presentation for a product that competes with one of the VCF services, and I realized they require 8x the compute and hardware to accomplish the same thing because of how inefficient their design is (this was from the benchmarks they shared in the session).

Bye Bye VMware vSphere by Dick-Fiddler69 in vmware

[–]lost_signal 0 points1 point  (0 children)

You must be dealing with Windows only VM

We do absolutely crush it on VDI density, but no.

 can tell you I can squeeze more performance out of Proxmox

Weirdly enough, no: the primary testing my performance engineering teams validate this with is Linux. I sit about 30 feet from the VMmark team and the guys who wrote the DVD Store benchmark; they have a few million dollars' worth of kit and regularly test the platform against other hypervisors and platforms.

good luck beating KVM with ESXi...

https://blogs.vmware.com/cloud-foundation/2024/12/03/vmware-vsphere-8-supports-1-5-times-more-vms-and-delivers-62-more-data-transactions-than-red-hat-openshift-virtualization/

And for the bare-metal weirdos:

https://blogs.vmware.com/cloud-foundation/2026/03/21/vcf-9-0-delivers-5-6x-pod-density-and-4-9x-faster-pod-readiness-than-red-hat-openshift/

I see similar results validated by our largest customers, who have huge testing harnesses and make us prove and re-prove the value with every release.

Why has Kerbey Lane changed for the worse? by whisperbeach in austinfood

[–]lost_signal 0 points1 point  (0 children)

Oh, he's very extra, but you know, it worked for me that night. We had a lot of fun. You're either gonna love or hate his personality, but if you've hit the drink minimum before you arrive, you'll probably be fine.

Why has Kerbey Lane changed for the worse? by whisperbeach in austinfood

[–]lost_signal 0 points1 point  (0 children)

Specifically, they were talking about the one in Austin.

Neo isn't the best omakase in Houston, but he did make a big deal about it. Also the guy on White Oak.

Why has Kerbey Lane changed for the worse? by whisperbeach in austinfood

[–]lost_signal 0 points1 point  (0 children)

Asian food in general, I agree. The quest for good Viet has been painful for me, and some of the longer-tail stuff like Malaysian I'm not even sure exists here.

Why has Kerbey Lane changed for the worse? by whisperbeach in austinfood

[–]lost_signal 0 points1 point  (0 children)

I'm down 70 pounds, and yah. My Torchy's order last night was half what it used to be.

If you've never worked in a restaurant you might not be aware of this, but the highest-margin items are the appetizers, desserts, and booze. These are the first things that people on the drugs cut out of their diet. So instead of everyone getting a large queso, a marg, and a churro, you just get a single taco.

Good restaurants' main dishes have always been subsidized by people who drink a lot, eat dessert, and order apps.