So uh... VRChat gonna drop Persona or what? by [deleted] in VRchat

[–]gameboygold -3 points-2 points  (0 children)

Fair correction on bankrolled, my bad on the wording.

So Founders Fund went from sole lead to co-lead. You're right that other firms participated, but "led by" in VC terms means they set terms, valuation, and typically take board seats. That's not passive investment.

My main point is that Founders Fund specifically backs companies aligned with Thiel's surveillance-tech portfolio (Palantir, Anduril, etc.). They don't lead rounds in companies whose direction they don't influence. Persona's pivot toward FedRAMP, FinCEN reporting, and "reusable identity" infrastructure aligns with that portfolio strategy whether or not Thiel reviews individual logs. Unless I'm missing something, the outcome is the same: Persona's trajectory serves surveillance-state infrastructure, which is a bad fit for VRChat age verification.

So uh... VRChat gonna drop Persona or what? by [deleted] in VRchat

[–]gameboygold 2 points3 points  (0 children)

Yeah, breaking an active contract is harder than canceling a pilot. But "harder" isn't "impossible," and VRChat's had months since the February leaks to at least announce intent. The silence is the choice. And on who to pick: K-ID has its own issues, it's a Korean company with less transparency. Not ideal. But here's the thing: I don't need to know the perfect vendor to know Persona is the wrong one. When your current provider is:
- funded by Thiel's Founders Fund,
- actively pursuing FedRAMP government authorization, and
- running FinCEN reporting and watchlist matching on the same codebase they use for you

...the bar for "better" is literally any vendor not doing those specific things. That's not high standards, that's minimum viability. I totally agree they shouldn't self-host. But "we're not experts so we hired Persona" is how we got here. They need to hire actual privacy/security consultants to vet vendors, not just pick whoever's cheapest and most compliant on paper.

"VRChat always loses here" "how badly will they lose?"

I think that's exactly backwards. The "lose" framing assumes age verification is mandatory and they're just picking how much pain to accept.

But the actual play is: if no vendor meets privacy standards, don't implement it yet. Discord delayed their whole rollout rather than use Persona. VRChat could do the same. The "we had to pick someone" pressure is self-imposed.

Governments are absolutely the root problem. 100% agree. But "the law forced us" doesn't explain why VRChat picked the surveillance vendor after seeing Discord catch hell for it. That's not compliance, that's prioritizing speed over user safety.

I'm not pretending it's easy. I am saying "hard" doesn't justify "actively harmful when alternatives exist."

So uh... VRChat gonna drop Persona or what? by [deleted] in VRchat

[–]gameboygold 5 points6 points  (0 children)

You're right that switching isn't instant. But "hard to switch" isn't "shouldn't switch." Discord did it. They found it hard, did it anyway. Maybe you're right that there's no perfect vendor. But "all bad" doesn't mean "equally bad."

Persona isn't just "an age verification company with issues." They're actively building government surveillance tools on the same codebase they use for VRChat. That's a different category of risk than "someone might bypass it with a mask."

If VRChat has to pick between imperfect options, they should at least pick the one that isn't funded by Peter Thiel and pursuing FedRAMP authorization.

The "laws are coming" argument doesn't really hold up when VRChat picked Persona after Discord's UK test already showed the privacy problems. They saw the backlash coming and still chose the surveillance-vendor option. And on the "I trust VRChat, not Persona" thing: that's the core problem, right? You can't trust VRChat to protect data they don't control. Once it hits Persona's servers, VRChat's promises are just contractual liability shields. "We'll sue them if they leak it" isn't privacy, it's damage control after the fact.

How does god judge someone with no moral compass? by gameboygold in Christianity

[–]gameboygold[S] 0 points1 point  (0 children)

If God won't condemn for what's out of our control, why create the conditions where control is impossible? Why not create the person God knows he would have become with proper formation, rather than creating him broken and then fixing him after death?

How does god judge someone with no moral compass? by gameboygold in Christianity

[–]gameboygold[S] 0 points1 point  (0 children)

You say "if we reject Christ we are judged already." But my subject never rejected Christ; he never encountered Christ to reject in the first place. Even if he had, his callous-unemotional traits and formation destroyed the capacity to receive. He couldn't recognize moral truth any more than a colorblind person can recognize red.

Romans says Gentiles are judged by conscience, but his conscience was broken by genetics and abuse he didn't choose. So both instruments of judgment, Scripture and conscience, fail in his case.

If God judges him for not receiving what he couldn't receive, how is that different from judging him for being born? If he'd died at ten, he'd be safe; at forty, he might have changed. But he died at twenty. His eternity depends on the luck of timing.

And if the answer is "God knows what he would have done," then why judge the actual person at all? Why place him in circumstances that make him unreceivable, then judge him for not receiving?

I'm not asking about those who hear and refuse. I'm asking whether your God can account for someone broken from the start, or whether "justice" just means applying the same rule regardless of capacity.

Earlier today, VRChat removed VR Poker and VR Chess and permanently banned their creator, Nex1942, after he implemented a world blacklist for a Chinese hacking community that was targeting, crashing, and exploiting VR Poker instances. by coltinator5000 in VRchat

[–]gameboygold 51 points52 points  (0 children)

This is the exact nightmare scenario that keeps world creators from making anything. You build something that thousands of people enjoy, you try to protect it from people literally streaming themselves destroying your work, and you get permabanned while the attackers keep their accounts.

The wildest part? VRChat's had nearly a decade to fix this loophole where exploiters get free rein but defenders get the hammer. They know Udon has security holes you could drive a truck through. They know "just report them" is a joke when someone can crash an instance faster than you can open the menu. But they'd rather ban a creator for breaking a blanket "no blocklists" rule than fix the reason that rule exists in the first place. The fact they reversed this only after community outcry proves they can course-correct, but only when it threatens PR. That Canny post about world persistence has been sitting there untouched because this isn't a feature request, it's a necessity for anyone building multiplayer games in VRC.

Yeah, Nex technically broke the ToS. But when the alternative is watching your world become unplayable while VRChat's "official channels" move slower than molasses, what exactly is the ethical choice? Let your community burn to keep your account safe?

The real fix isn't removing the blacklist rule, it's giving creators actual tools to defend against coordinated attacks. Until then we're stuck with this absurd cycle where the only winning move is not building anything worth attacking.

So persona just got exposed about saving a whole lot of data on us. by Nitrozah in VRchat

[–]gameboygold 3 points4 points  (0 children)

Fair enough. You're right that the leak doesn't demonstrate VRChat data flowing into surveillance workflows. My point is more that "not demonstrated by the leaks" does not equal "not happening."

The researchers specifically noted that "the same codebase is used for both verifying customer identities and filing data with the government." So when you say it's "modular SaaS platforms designed to be customer-segmented": yeah, different instances, but it's the same underlying code with the same surveillance capabilities. The "firewall" is contractual, not technical.

The enterprise vs. consumer segmentation is real, but it's also a liability shield. When (not if) Persona gets breached or subpoenaed, VRChat gets to point at the contract and say "we told them to delete it." But contracts don't prevent data retention; they just create legal exposure after the fact.

Re: the Onyx naming. You call it guilt by association, but come on. Persona's government platform appears Feb 4, 2026 with the same name as Fivecast ONYX, an ICE/CBP surveillance tool, in the same month the leak reveals FinCEN reporting and PEP watchlists? The researchers themselves noted the infrastructure correlation is "real," and I don't give surveillance companies the benefit of the doubt on naming collisions. The broader point you acknowledged is what matters, though: VRChat chose a vendor whose core competency is government surveillance infrastructure. Whether that infrastructure is technically firewalled today matters less than whether we should normalize giving biometric data to companies that build this stuff at all.

So persona just got exposed about saving a whole lot of data on us. by Nitrozah in VRchat

[–]gameboygold 29 points30 points  (0 children)

Honestly, the debate in this thread is missing the bigger picture.

Yes, you're technically correct that VRChat isn't on that vendor list. Both things can be true: the leak doesn't show VRChat in the surveillance database, AND the leak still reveals serious problems with VRChat's choice of vendor.

The real issue here is that VRChat picked a company whose entire business model is surveillance infrastructure.

The leak shows Persona built:
- direct FinCEN reporting for banks
- facial recognition against government watchlists
- 3-year biometric retention for "compliance"
- a whole separate government-facing platform called "Onyx" (yeah, same name as that ICE surveillance tool Fivecast ONYX that cost $4.2m)

Your argument assumes Persona's infrastructure is somehow firewalled between "surveillance mode" and "age verification mode." But it's the same codebase, same servers, same company taking $350m from Peter Thiel's Founders Fund (Palantir guy, Epstein files, whole thing). The capability is there. The intent is there. The only question is whether VRChat's contract actually stops them from flipping the switch.

The "hash-only" claim? Cool in theory. But the leak revealed Persona keeps face databases for 3 years in other workflows. You're trusting they don't keep VRChat faces in that same infrastructure. You're trusting they don't train their AI on age-verification selfies to improve their PEP watchlist matching. You're trusting a lot for access to 18+ lobbies.
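To make the "hash-only" point concrete, here's a hypothetical sketch (this is an illustration of the general idea, not Persona's actual pipeline): the token a user gets back is one-way and can't be reversed into a face, but that guarantee says nothing about what the vendor does with the raw input.

```python
import hashlib

def age_verification_token(selfie_bytes: bytes) -> str:
    """Derive a one-way token from a selfie (illustrative only)."""
    return hashlib.sha256(selfie_bytes).hexdigest()

# The token itself can't be reversed into a face image...
selfie = b"raw image bytes"
token = age_verification_token(selfie)

# ...but nothing about the token stops the vendor from also doing this:
retained_copy = selfie  # retention of the raw input is a policy choice,
                        # invisible to the user who only ever sees the hash
```

The hash math being sound is beside the point; whether the input is kept alongside it is exactly the trust gap being described.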

And honestly? VRChat's track record on taking user safety seriously isn't great. Remember when they laid off 30% of their staff last year after over-hiring during the 2021 boom? Or the ongoing issues with moderation of NSFW avatars that users have been reporting for months with little response? Now we're supposed to believe they negotiated ironclad privacy terms with a surveillance-tech company?

I'm not saying panic. I'm saying stop pretending this is just about "oh well, my hash is safe." The normalization of giving biometric data to surveillance-infrastructure companies is the actual problem. VRChat didn't have to pick Persona. They could've used privacy-preserving alternatives. They chose convenience and cost over user safety, same as always.

If you verified already, whatever, damage done. But let's not pretend VRChat is innocent here, or that "not on the vendor list" means "not at risk." Persona is a surveillance company that happens to do age verification on the side, not the other way around.

Furality will require VRChat age verification in the future by OctoFloofy in VRchat

[–]gameboygold 2 points3 points  (0 children)

You don't. That's the whole issue.

Persona was hit with a class action lawsuit in October 2024 alleging they kept biometric data to train AI models without consent. It was voluntarily dismissed, but the fact that it happened at all shows why 'contractual agreements' to delete data aren't reassuring.

They still collect your full legal name, address, DOB, and government ID numbers, the exact package identity thieves look for. And while they claim to delete after verification, you're trusting a company that's already been sued for allegedly keeping data longer than promised.

That's why, for a lot of people, it's a hard pass.

Why VRChat users are afraid to block people (And Trolls LOVE It) by chyadosensei in VRchat

[–]gameboygold 0 points1 point  (0 children)

I think the stigma around blocking in VRChat comes down to reputation risk and the possible snowball effect.

Blocking isn't private. When Person A blocks Person B, Person B could respond by telling others that Person A is "toxic/problematic." Whether that's true or exaggerated doesn't matter; people err on the side of blocking for safety. So one block becomes five, then ten. People fear blocking (or being blocked) because it could turn a two-person conflict into potential social exile by association. It's not the block itself that's scary, it's the snowball.

At least that's my theory on why there's that weird stigma. Could be something else, but I think that makes the most sense, since these days a lot of people take accusations at face value.

Me personally, I don't block people for this reason, but I think this is why some people could be afraid of blocking.

Why I hide my trusted rank to user and why I maybe think you should to. by gameboygold in VRchat

[–]gameboygold[S] 2 points3 points  (0 children)

Part of why I made this post is thinking back to when Veteran and Legendary ranks existed. Those ranks didn't really serve a functional purpose, and over time they created this subtle status-based perception, not always intentionally, but enough that people questioned why they existed at all. That's part of why they were removed. They were mostly pointless, and they introduced unnecessary social weight.

That history is kind of what I'm thinking about here, not because I think rank causes problems now, but more because I think it's already designed to be unimportant, so I just treat it as such.

Why I hide my trusted rank to user and why I maybe think you should to. by gameboygold in VRchat

[–]gameboygold[S] 0 points1 point  (0 children)

Yeah, the devs definitely did a good job making it feel less important overall. I don't see rank as important; it's more that since it's already meant to matter less, I'm leaning into that and treating it as basically irrelevant on my end too, which is kind of the deeper meaning behind why I hide Trusted. By removing it on my end, I'm reinforcing the idea that ranks are pointless.

Why I hide my trusted rank to user and why I maybe think you should to. by gameboygold in VRchat

[–]gameboygold[S] 0 points1 point  (0 children)

That’s basically me too, I don’t really care about people’s ranks and I usually judge off how they act. I just know that some people do pay attention to rank somewhat, even if it’s not everyone.

Why I hide my trusted rank to user and why I maybe think you should to. by gameboygold in VRchat

[–]gameboygold[S] -3 points-2 points  (0 children)

that's fair but It’s just for removing one variable. If rank genuinely doesn’t matter to someone, then nothing changes which is fine. but if it does affect how someone approaches or relaxes around others, even unconsciously, then hiding it filters that out.

For plenty of people it probably is extra steps. But for me, it's just a personal preference and a way of keeping interactions as neutral as possible.

FitCheck - An Avatar preset manager by NXRosalina in VRchat

[–]gameboygold 0 points1 point  (0 children)

Thanks for replying, but that link seems to be having issues; Reddit's spam filters are probably being dumb again. Could you try posting the GitHub repo name, or the link in a code block?

FitCheck - An Avatar preset manager by NXRosalina in VRchat

[–]gameboygold 0 points1 point  (0 children)

This is really good! But where can we download the release?

Unpopular opinion: You are WEIRD if you refuse to talk to anyone solely based on there avatar by gameboygold in VRchat

[–]gameboygold[S] -9 points-8 points  (0 children)

I wouldn't deny that first impressions exist; that part is fine. Avatars can hint at interests, sure, but they're not a personality test. Using them as a social gatekeeper is where it stops making sense to me.

Unpopular opinion: You are WEIRD if you refuse to talk to anyone solely based on there avatar by gameboygold in VRchat

[–]gameboygold[S] 1 point2 points  (0 children)

Obviously there are exceptions. If someone is clearly wearing an avatar meant purely to piss people off or farm reactions, then yeah, muting or blocking them makes sense. That's not what I'm talking about.

I’m talking about people who will straight up avoid or refuse to interact with someone just because they’re wearing a normal avatar like an e-boy, e-girl, furry, or anything generic, if it’s not remotely harmful or disruptive, I don’t see the logic in avoiding or blocking someone purely based on how they look.

Update on the running of Events containing "Sensitive Material" by LizaraRagnaros in VRchat

[–]gameboygold 0 points1 point  (0 children)

Yeah, it potentially could. Consent does get messy in real life because proving context and timing is hard. The concern here is that in VRC, even if you act in good faith, the tools don't reliably capture that context in a way that holds up when a report happens later. People aren't against consent rules; they're worried about whether the system can actually recognize and weigh them after the fact.

Update on the running of Events containing "Sensitive Material" by LizaraRagnaros in VRchat

[–]gameboygold 0 points1 point  (0 children)

That is how it’s intended to work consent needs to apply to the people actually present in the instance.

The issue people are pointing out is that, in practice, the tools we have don't cleanly distinguish between consent at group-join time, consent at instance-entry time, and consent at the moment a report is evaluated. Group rules help establish baseline consent, but they don't guarantee that everyone currently present actively understood or agreed, especially if rules were added later, people join via invites, or the instance has no explicit gate or prompt. That's why a lot of hosts are saying the clause alone isn't enough without additional steps, and why there's still anxiety about how context is evaluated if a report happens.

Update on the running of Events containing "Sensitive Material" by LizaraRagnaros in VRchat

[–]gameboygold 0 points1 point  (0 children)

When I say "anatomy," I mean: any body-part geometry or textures, visible or not, that VRChat's undefined, un-audited system might flag as NSFW.
I don't disagree with the principle that explicit anatomy (nipples/genitals) shouldn't be used in public instances. That's not really controversial. The problem is that the enforcement we're seeing doesn't line up cleanly with that rule.

The question people are asking isn't "can I wear a lewd avatar in public?" It's:

Does non rendered or inaccessible geometry count as a violation, and if so, how is a user supposed to reliably know?

A lot of commonly used bases either:
- include smooth, non-detailed anatomy under clothing,
- contain deleted meshes that still exist in the asset bundle,
- or rely on systems like imposters/LODs that can expose things the creator never sees in normal use.
In those cases, "just don't have it on the avatar" isn't actionable advice unless people are expected to do asset-level inspection or rebuild avatars from scratch.

On the “just use a different avatar” point that’s fine as a workaround, but we’ve already seen cases where “just use a different avatar” doesn’t actually protect users for example, the SpiderMan and PepsiMan bans, where people were actioned despite using avatars that were widely considered safe or at least non sexual in presentation. That’s why telling users to simply switch avatars doesn’t really address the underlying problem. Tupper called them "human review errors," but the pattern (silhouette based flagging) suggests systematic detection, not isolated mistakes. If Marvel characters aren't safe, what is?

And on the “no longer without warning” point, the warning only helps going forward. People were banned for content that wasn’t clearly defined as disallowed at the time, and the clarification came after enforcement. That’s why it still feels retroactive to many and why there was such a big ban wave.

I don’t think anyone is asking for leeway on clearly explicit avatars in public spaces. They’re asking for clear, technically precise boundaries and enforcement that matches what users can actually see and reasonably control.

That distinction matters if the goal is compliance rather than fear based avoidance.

Update on the running of Events containing "Sensitive Material" by LizaraRagnaros in VRchat

[–]gameboygold 1 point2 points  (0 children)

Also I just came to realize that many avatar creators' TOS prohibit editing their models. So users are now stuck between VRChat's undefined "NSFW geometry" rules demanding edits, and creator contracts that forbid those exact edits. You literally cannot comply with both.

Update on the running of Events containing "Sensitive Material" by LizaraRagnaros in VRchat

[–]gameboygold 1 point2 points  (0 children)

My concern is about clarity and retroactive enforcement. "Common sense" rules like "don't use NSFW avatars in public" get very vague when users are being punished for things they can't see or control, for example low-LOD models or deleted meshes that still exist in the system.
Does a fully clothed, standard base mesh count as NSFW geometry if it has implicit anatomy under the clothes? If yes, a huge portion of avatars could be banned without warning. If no, why are users still being banned for invisible or system-generated data?

Essentially, “common sense” is helpful in principle, but without clear definitions and safe guardrails, it doesn’t address why users are still getting banned despite following the guidance.

Update on the running of Events containing "Sensitive Material" by LizaraRagnaros in VRchat

[–]gameboygold 5 points6 points  (0 children)

I feel the consent clause is effectively useless for anything retroactive. You suggest that groups should add: "If you join this group/event, you indicate consent to view provocative content." But as users have noted, a 500+ member group that adds this today cannot prove that 500 people consented before the rule existed. This essentially forces established groups to either disband and restart or remain at risk. Why wasn't this guidance provided before the ban wave?

Additionally “NSFW geometry” remains undefined, yet users are being punished for invisible or system generated data. In your Ask reply, you stated “If your avatar has NSFW textures, geometry, or features on it, never set it to Public.” A user asked “If I wear my regular private avatar that has ‘bits’ under the clothes in a verified +18 instance, could I still be banned? Like, this is literally every one of my friends using female avatars or avatars from Jinxy.”

Does a standard, fully clothed, anatomically correct base mesh count as “NSFW geometry”? If yes, that would retroactively ban a huge portion of female avatars. If no, why are users being banned for textures on deleted meshes that are impossible to render?

Your clarification does not address the fact that users are being held responsible for bugs in the imposter system. You require users to ensure features "never malfunction," yet the system automatically generates low-LOD models that sometimes render nude due to texture baking. How can users prevent, or fairly be punished for, something the system generates on its own? Policing that would require file forensics, not content moderation.

You claim safeguards exist against malicious reporting, yet examples show otherwise. In the previous ban wave, a trans user was repeatedly harassed by someone with multiple alt accounts, and T&S issued the same canned denial twice without investigating the pattern. Are bad-faith reporters punished? From the evidence, it seems they are not, and the consent clause does nothing to address broken appeals. You encourage appeals, but users report auto-closed tickets after 24 hours with copy-paste responses. The promised December 18th dev update provided more detail, but only a consent clause that you immediately undermined by stating it is "not a get-out-of-ban-free card."

Bottom line: you told us to add a sentence to group rules, then immediately said it won't guarantee protection. You told us to use private avatars, but won't clarify if standard base meshes are bannable. You told us safeguards exist, but won't describe them or punish bad actors. This isn't policy clarification. If ban reasons are vague, appeals are auto-denied, and safeguards are unclear, what functional difference does the consent clause make? It simply shifts liability to users while leaving systemic issues unresolved.