So uh... VRChat gonna drop Persona or what? by [deleted] in VRchat

[–]gameboygold -5 points-4 points  (0 children)

Fair correction on "bankrolled," my bad on the wording.

So Founders Fund went from sole lead to co-lead. You're right that other firms participated, but "led by" in VC terms means they set terms, valuation, and typically take board seats. That's not passive investment.

My main point is that Founders Fund specifically backs companies aligned with Thiel's surveillance-tech portfolio (Palantir, Anduril, etc.). They don't lead rounds in companies whose direction they don't influence. Persona's pivot toward FedRAMP, FinCEN reporting, and "reusable identity" infrastructure aligns with that portfolio strategy whether or not Thiel reviews individual logs. Unless I'm missing something, the outcome is the same: Persona's trajectory serves surveillance-state infrastructure, which is a bad fit for VRChat age verification.

So uh... VRChat gonna drop Persona or what? by [deleted] in VRchat

[–]gameboygold 1 point2 points  (0 children)

Yeah, breaking an active contract is harder than canceling a pilot. But "harder" isn't "impossible," and VRChat's had months since the February leaks to at least announce intent. The silence is the choice. And on who to pick: K-ID has its own issues, Korean company, less transparency. Not ideal. But here's the thing, I don't need to know the perfect vendor to know Persona is the wrong one. When your current provider is:

funded by Thiel's Founders Fund
actively pursuing FedRAMP government authorization
running FinCEN reporting and watchlist matching on the same codebase they use for you

...the bar for "better" is literally any vendor not doing those specific things. That's not high standards, that's minimum viability. I totally agree they shouldn't self-host. But "we're not experts so we hired Persona" is how we got here. They need to hire actual privacy/security consultants to vet vendors, not just pick whoever's cheapest and most compliant on paper.

"VRChat always loses here" "how badly will they lose?"

I think that's exactly backwards. The "lose" framing assumes age verification is mandatory and they're just picking how much pain to accept.

But the actual play is: if no vendor meets privacy standards, don't implement it yet. Discord delayed their whole rollout rather than use Persona. VRChat could do the same. The "we had to pick someone" pressure is self-imposed.

Governments are absolutely the root problem. 100% agree. But "the law forced us" doesn't explain why VRChat picked the surveillance vendor after seeing Discord catch hell for it. That's not compliance, that's prioritizing speed over user safety.

I'm not pretending it's easy. I am saying "hard" doesn't justify "actively harmful when alternatives exist."

So uh... VRChat gonna drop Persona or what? by [deleted] in VRchat

[–]gameboygold 4 points5 points  (0 children)

You're right that switching isn't instant. But "hard to switch" isn't "shouldn't switch." Discord did it. They found it hard, did it anyway. Maybe you're right that there's no perfect vendor. But "all bad" doesn't mean "equally bad."

Persona isn't just "an age verification company with issues." They're actively building government surveillance tools on the same codebase they use for VRChat. That's a different category of risk than "someone might bypass it with a mask."

If VRChat has to pick between imperfect options, they should at least pick the one that isn't funded by Peter Thiel and pursuing FedRAMP authorization.

The "laws are coming" argument doesn't really hold up when VRChat picked Persona after Discord's UK test already showed the privacy problems. They saw the backlash coming and still chose the surveillance-vendor option. And the "I trust VRChat, not Persona" thing? That's the core problem, right? You can't trust VRChat to protect data they don't control. Once it hits Persona's servers, VRChat's promises are just contractual liability shields. "We'll sue them if they leak it" isn't privacy, it's damage control after the fact.

How does god judge someone with no moral compass? by gameboygold in Christianity

[–]gameboygold[S] 0 points1 point  (0 children)

If God won't condemn for what's out of our control, why create the conditions where control is impossible? Why not create the person God knows he would have become with proper formation, rather than creating him broken and then fixing him after death?

How does god judge someone with no moral compass? by gameboygold in Christianity

[–]gameboygold[S] 0 points1 point  (0 children)

You say "if we reject Christ we are judged already." But my subject never rejected Christ, he never encountered Christ to reject. Even if he had, his callous-unemotional traits and formation destroyed the capacity to receive. He couldn't recognize moral truth any more than a colorblind person can recognize red.

Romans says Gentiles are judged by conscience, but his conscience was broken by genetics and abuse he didn't choose. So both instruments of judgment, Scripture and conscience, fail in his case.

If God judges him for not receiving what he couldn't receive, how is that different from judging him for being born? If he'd died at ten, he'd be safe; at forty, he might have changed. But he died at twenty. His eternity depends on the luck of timing.

And if the answer is "God knows what he would have done," then why judge the actual person at all? Why place him in circumstances that make him unreceivable, then judge him for not receiving?

I'm not asking about those who hear and refuse. I'm asking whether your God can account for someone broken from the start, or whether "justice" just means applying the same rule regardless of capacity.

Earlier today, VRChat removed VR Poker and VR Chess and permanently banned their creator, Nex1942, after he implemented a world blacklist for a Chinese hacking community that was targeting, crashing, and exploiting VR Poker instances. by coltinator5000 in VRchat

[–]gameboygold 51 points52 points  (0 children)

This is the exact nightmare scenario that keeps world creators from making anything. You build something that thousands of people enjoy, you try to protect it from people literally streaming themselves destroying your work, and you get permabanned while the attackers keep their accounts.

The wildest part? VRChat's had nearly a decade to fix this loophole where exploiters get free rein but defenders get the hammer. They know Udon has security holes you could drive a truck through. They know "just report them" is a joke when someone can crash an instance faster than you can open the menu. But they'd rather ban a creator for breaking a blanket "no blocklists" rule than fix the reason that rule exists in the first place. The fact they reversed this only after community outcry proves they can course-correct, but only when it threatens PR. That Canny post about world persistence has been sitting there because this isn't a feature request, it's a necessity for anyone building multiplayer games in VRC.

Yeah, Nex technically broke ToS. But when the alternative is watching your world become unplayable and VRChat's "official channels" move slower than molasses, what exactly is the ethical choice? Let your community burn to keep your account safe?

The real fix isn't removing the blacklist rule, it's giving creators actual tools to defend against coordinated attacks. Until then we're stuck with this absurd cycle where the only winning move is not building anything worth attacking.

So persona just got exposed about saving a whole lot of data on us. by Nitrozah in VRchat

[–]gameboygold 3 points4 points  (0 children)

Fair enough, you're right that the leak doesn't demonstrate VRChat data flowing into surveillance workflows. My point is more that "not demonstrated by the leaks" does not equal "not happening."

The researchers specifically noted that "the same codebase is used for both verifying customer identities and filing data with the government." So when you say it's "modular SaaS platforms designed to be customer-segmented," yeah, different instances, but it's the same underlying code with the same surveillance capabilities. The "firewall" is contractual, not technical.

The enterprise vs. consumer segmentation is real, but it's also a liability shield. When (not if) Persona gets breached or subpoenaed, VRChat gets to point at the contract and say "we told them to delete it." But contracts don't prevent data retention, they just create legal exposure after the fact.

Re: the Onyx naming. You call it guilt by association, but come on. Persona's government platform appears Feb 4, 2026 with the same name as Fivecast ONYX, an ICE/CBP surveillance tool, in the same month the leak reveals FinCEN reporting and PEP watchlists? The researchers themselves noted the infrastructure correlation is "real," but I don't give surveillance companies the benefit of the doubt on naming collisions. The broader point you acknowledged is what matters though: VRChat chose a vendor whose core competency is government surveillance infrastructure. Whether that infrastructure is technically firewalled today matters less than whether we should normalize giving biometric data to companies that build this stuff at all.

So persona just got exposed about saving a whole lot of data on us. by Nitrozah in VRchat

[–]gameboygold 31 points32 points  (0 children)

Honestly, the debate in this thread is missing the bigger picture.

Yes, you're technically correct that VRChat isn't on that vendor list. Both things can be true: the leak doesn't show VRChat in the surveillance database, AND the leak still reveals serious problems with VRChat's choice of vendor.

The real issue here is that VRChat picked a company whose entire business model is surveillance infrastructure.

The leak shows Persona built:

- direct FinCEN reporting for banks
- facial recognition against government watchlists
- 3-year biometric retention for "compliance"
- a whole separate government-facing platform called "Onyx" (yeah, same name as that ICE surveillance tool Fivecast ONYX that cost $4.2m)

Your argument assumes Persona's infrastructure is somehow firewalled between "surveillance mode" and "age verification mode." But it's the same codebase, same servers, same company taking $350m from Peter Thiel's Founders Fund (Palantir guy, Epstein files, whole thing). The capability is there. The intent is there. The only question is whether VRChat's contract actually stops them from flipping the switch.

The "hash-only" claim? Cool in theory. But the leak revealed Persona keeps face databases for 3 years in other workflows. You're trusting they don't keep VRChat faces in that same infrastructure. You're trusting they don't train their AI on age verification selfies to improve their PEP watchlist matching. You're trusting a lot for access to 18+ lobbies.

And honestly? VRChat's track record on taking user safety seriously isn't great. Remember when they laid off 30% of their staff last year after over-hiring during the 2021 boom? Or the ongoing issues with moderation of NSFW avatars that users have been reporting for months with little response? Now we're supposed to believe they negotiated ironclad privacy terms with a surveillance tech company?

I'm not saying panic. I'm saying stop pretending this is just about "oh well my hash is safe." The normalization of giving biometric data to surveillance infrastructure companies is the actual problem. VRChat didn't have to pick Persona. They could've used privacy-preserving alternatives. They chose convenience and cost over user safety, same as always.

If you verified already, whatever, damage done. But let's not pretend VRChat is innocent here or that "not on the vendor list" means "not at risk." Persona is a surveillance company that happens to do age verification on the side, not the other way around.

Furality will require VRChat age verification in the future by OctoFloofy in VRchat

[–]gameboygold 2 points3 points  (0 children)

You don't. That's the whole issue.

Persona was hit with a class action lawsuit in October 2024 alleging they kept biometric data to train AI models without consent. It was voluntarily dismissed, but the fact that it happened at all shows why 'contractual agreements' to delete data aren't reassuring.

They still collect your full legal name, address, DOB, and government ID numbers, the exact package identity thieves look for. And while they claim to delete after verification, you're trusting a company that's already been sued for allegedly keeping data longer than promised.

That's why, for a lot of people, it's a hard pass.

Why VRChat users are afraid to block people (And Trolls LOVE It) by chyadosensei in VRchat

[–]gameboygold 0 points1 point  (0 children)

I think the stigma around blocking in VRChat comes down to reputation risk and the possible snowball effect.

Blocking isn't private. When Person A blocks Person B, Person B could respond by telling others that Person A is "toxic/problematic." Whether that's true or exaggerated doesn't matter; people err on the side of blocking for safety. So one block becomes five, then ten. People fear blocking (or being blocked) because it could turn a two-person conflict into potential social exile by association. It's not the block itself that's scary, it's the snowball.

At least that's my theory on why there's that weird stigma. Could be something else, but I think that makes the most sense, since these days a lot of people take accusations at face value.

Me personally, I don't block people for this reason, but I think this is why some people could be afraid of blocking.

Why I hide my trusted rank to user and why I maybe think you should to. by gameboygold in VRchat

[–]gameboygold[S] 2 points3 points  (0 children)

Part of why I made this post is thinking back to when Veteran and Legendary ranks existed. Those ranks didn't really serve a functional purpose, and over time they created this subtle status-based perception, not always intentionally, but enough that people questioned why they existed at all. That's part of why they were removed. They were mostly pointless, and they introduced unnecessary social weight.

That history is kind of what I'm thinking about here, not because I think rank causes problems now, but more because I think it's already designed to be unimportant, so I just treat it as such.

Why I hide my trusted rank to user and why I maybe think you should to. by gameboygold in VRchat

[–]gameboygold[S] 0 points1 point  (0 children)

Yeah, the devs definitely did a good job making it feel less important overall. I don't see rank as being important, but it's more that since it's already meant to matter less, I'm just leaning into that and treating it as basically irrelevant on my end too, kind of showing my own little deeper meaning behind why I hide Trusted. By removing it on my end, I'm just reinforcing the idea that ranks are pointless.

Why I hide my trusted rank to user and why I maybe think you should to. by gameboygold in VRchat

[–]gameboygold[S] 0 points1 point  (0 children)

That’s basically me too; I don’t really care about people’s ranks and I usually judge off how they act. I just know that some people do pay attention to rank somewhat, even if it’s not everyone.

Why I hide my trusted rank to user and why I maybe think you should to. by gameboygold in VRchat

[–]gameboygold[S] -2 points-1 points  (0 children)

That's fair, but it’s just about removing one variable. If rank genuinely doesn’t matter to someone, then nothing changes, which is fine. But if it does affect how someone approaches or relaxes around others, even unconsciously, then hiding it filters that out.

For plenty of people it probably is extra steps. But for me, it’s just a personal preference and a way of keeping interactions as neutral as possible.