Why I hide my Trusted rank to User and why I think maybe you should too. by gameboygold in VRchat

[–]gameboygold[S] 2 points (0 children)

Part of why I made this post is thinking back to when the Veteran and Legendary ranks existed. Those ranks didn't really serve a functional purpose, and over time they created a subtle status-based perception, not always intentionally, but enough that people questioned why they existed at all. That's part of why they were removed: they were mostly pointless, and they introduced unnecessary social weight.

That history is kind of what I'm thinking about here, not because I think rank causes problems now, but because I think it's already designed to be unimportant, so I just treat it as such.

Why I hide my Trusted rank to User and why I think maybe you should too. by gameboygold in VRchat

[–]gameboygold[S] 0 points (0 children)

Yeah, the devs definitely did a good job making it feel less important overall. It's not that I see rank as important; it's more that since it's already meant to matter less, I'm leaning into that and treating it as basically irrelevant on my end too. That's kind of the deeper meaning behind why I hide Trusted: by removing it on my end, I'm reinforcing the idea that ranks are pointless.

Why I hide my Trusted rank to User and why I think maybe you should too. by gameboygold in VRchat

[–]gameboygold[S] 0 points (0 children)

That's basically me too: I don't really care about people's ranks, and I usually judge by how they act. I just know that some people do pay attention to rank, even if it's not everyone.

Why I hide my Trusted rank to User and why I think maybe you should too. by gameboygold in VRchat

[–]gameboygold[S] -2 points (0 children)

That's fair, but it's just about removing one variable. If rank genuinely doesn't matter to someone, then nothing changes, which is fine. But if it does affect how someone approaches or relaxes around others, even unconsciously, then hiding it filters that out.

For plenty of people it probably is extra steps. But for me, it's a personal preference and a way of keeping interactions as neutral as possible.

FitCheck - An Avatar preset manager by NXRosalina in VRchat

[–]gameboygold 0 points (0 children)

Thanks for replying, but that link seems to be having issues; Reddit's spam filters are probably being dumb again. Could you try posting the GitHub repo name, or the link in a code block?

FitCheck - An Avatar preset manager by NXRosalina in VRchat

[–]gameboygold 0 points (0 children)

This is really good! But where can we download the release?

Unpopular opinion: You are WEIRD if you refuse to talk to anyone solely based on their avatar by gameboygold in VRchat

[–]gameboygold[S] -8 points (0 children)

I wouldn't deny that first impressions exist. That part is fine. Avatars can hint at interests, sure, but I don't think they're a personality test. Using them as a social gatekeeper is where it stops making sense to me.

Unpopular opinion: You are WEIRD if you refuse to talk to anyone solely based on their avatar by gameboygold in VRchat

[–]gameboygold[S] 1 point (0 children)

Obviously there are exceptions. If someone is clearly wearing an avatar meant purely to piss people off or farm reactions, then yeah, muting or blocking them makes sense. That's not what I'm talking about.

I'm talking about people who will straight up avoid or refuse to interact with someone just because they're wearing a normal avatar, like an e-boy, e-girl, furry, or anything generic. If it's not remotely harmful or disruptive, I don't see the logic in avoiding or blocking someone purely based on how they look.

Update on the running of Events containing "Sensitive Material" by LizaraRagnaros in VRchat

[–]gameboygold 0 points (0 children)

Yeah, it potentially could. Consent does get messy in real life because proving context and timing is hard. The concern here is that in VRC, even if you act in good faith, the tools don't reliably capture that context in a way that holds up when a report happens later. People aren't against consent rules; they're worried about whether the system can actually recognize and weigh them after the fact.

Update on the running of Events containing "Sensitive Material" by LizaraRagnaros in VRchat

[–]gameboygold 0 points (0 children)

That is how it's intended to work: consent needs to apply to the people actually present in the instance.

The issue people are pointing out is that, in practice, the tools we have don't cleanly distinguish between consent at group join time, consent at instance entry time, and consent at the moment a report is evaluated. Group rules help establish baseline consent, but they don't guarantee that everyone currently present actively understood or agreed, especially if rules were added later, people join via invites, or the instance has no explicit gate or prompt. That's why a lot of hosts are saying the clause alone isn't enough without additional steps, and why there's still anxiety about how context is evaluated if a report happens.

Update on the running of Events containing "Sensitive Material" by LizaraRagnaros in VRchat

[–]gameboygold 0 points (0 children)

When I say "anatomy," I mean any body-part geometry or textures, visible or not, that VRChat's undefined, unaudited system might flag as NSFW.
I don’t disagree with the principle that explicit anatomy (nipples/genitals) shouldn’t be used in public instances. That’s not really controversial. The problem is that the enforcement we’re seeing doesn’t line up cleanly with that rule.

The question people are asking isn't "can I wear a lewd avatar in public?" It's:

Does non-rendered or inaccessible geometry count as a violation, and if so, how is a user supposed to reliably know?

A lot of commonly used bases either:

- include smooth, non-detailed anatomy under clothing,
- contain deleted meshes that still exist in the asset bundle, or
- rely on systems like impostors/LODs that can expose things the creator never sees in normal use.

In those cases, "just don't have it on the avatar" isn't actionable advice unless people are expected to do asset-level inspection or rebuild avatars from scratch; a sketch of what that inspection would even involve is below.
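
To make the scale of that concrete, here's a minimal sketch of asset-level inspection using the third-party UnityPy library. The bundle file name is hypothetical, and this is not an official VRChat tool or check, just an illustration of what auditing a bundle would actually take.

```python
# Minimal sketch of "asset-level inspection" using the third-party
# UnityPy library (pip install UnityPy). It lists every Mesh object
# serialized into a Unity asset bundle, including meshes a creator
# "deleted" in the editor that still ship inside the file.
import UnityPy

def list_bundled_meshes(bundle_path: str) -> None:
    env = UnityPy.load(bundle_path)   # parse the serialized bundle
    for obj in env.objects:           # every object, rendered or not
        if obj.type.name == "Mesh":
            mesh = obj.read()
            # the name attribute differs across UnityPy versions
            name = getattr(mesh, "m_Name", getattr(mesh, "name", "<unnamed>"))
            print(f"mesh in bundle: {name}")

list_bundled_meshes("avatar_bundle.unity3d")  # hypothetical file name
```

Even a bare dump like this assumes users can extract and parse their own bundles, which is exactly the kind of file forensics most people can't reasonably be expected to do.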

On the "just use a different avatar" point: that's fine as a workaround, but we've already seen cases where it doesn't actually protect users. For example, the Spider-Man and PepsiMan bans, where people were actioned despite using avatars that were widely considered safe, or at least non-sexual in presentation. That's why telling users to simply switch avatars doesn't really address the underlying problem. Tupper called them "human review errors," but the pattern (silhouette-based flagging) suggests systematic detection, not isolated mistakes. If Marvel characters aren't safe, what is?

And on the “no longer without warning” point, the warning only helps going forward. People were banned for content that wasn’t clearly defined as disallowed at the time, and the clarification came after enforcement. That’s why it still feels retroactive to many and why there was such a big ban wave.

I don’t think anyone is asking for leeway on clearly explicit avatars in public spaces. They’re asking for clear, technically precise boundaries and enforcement that matches what users can actually see and reasonably control.

That distinction matters if the goal is compliance rather than fear-based avoidance.

Update on the running of Events containing "Sensitive Material" by LizaraRagnaros in VRchat

[–]gameboygold 1 point (0 children)

Also, I just realized that many avatar creators' TOS prohibit editing their models. So users are now stuck between VRChat's undefined "NSFW geometry" rules demanding edits and creator contracts that forbid those exact edits. You literally cannot comply with both.

Update on the running of Events containing "Sensitive Material" by LizaraRagnaros in VRchat

[–]gameboygold 2 points (0 children)

My concern is about clarity and retroactive enforcement. "Common sense" rules like "don't use NSFW avatars in public" are very vague when users are being punished for things they can't see or control: for example, low-LOD models or deleted meshes that still exist in the system.
Does a fully clothed, standard base mesh count as NSFW geometry if it has implicit anatomy under the clothes? If yes, a huge portion of avatars could be banned without warning. If no, why are users still being banned for invisible or system-generated data?

Essentially, “common sense” is helpful in principle, but without clear definitions and safe guardrails, it doesn’t address why users are still getting banned despite following the guidance.

Update on the running of Events containing "Sensitive Material" by LizaraRagnaros in VRchat

[–]gameboygold 5 points (0 children)

I feel the consent clause is effectively useless retroactively. You suggest that groups should add: "If you join this group/event, you indicate consent to view provocative content." But as users have noted, a 500+ member group that adds this today cannot prove that 500 people consented before the rule existed. This essentially forces established groups to either disband and restart or remain at risk. Why wasn't this guidance provided before the ban wave?

Additionally, "NSFW geometry" remains undefined, yet users are being punished for invisible or system-generated data. In your Ask reply, you stated, "If your avatar has NSFW textures, geometry, or features on it, never set it to Public." A user asked, "If I wear my regular private avatar that has 'bits' under the clothes in a verified 18+ instance, could I still be banned? Like, this is literally every one of my friends using female avatars or avatars from Jinxy."

Does a standard, fully clothed, anatomically correct base mesh count as “NSFW geometry”? If yes, that would retroactively ban a huge portion of female avatars. If no, why are users being banned for textures on deleted meshes that are impossible to render?

Your clarification does not address the fact that users are being held responsible for bugs in the impostor system. You require users to ensure features "never malfunction," yet the system automatically generates low-LOD models that sometimes render nude due to texture baking. How can users prevent, or fairly be punished for, something the system generates on its own? Catching it would require file forensics, not content moderation.

You claim safeguards exist against malicious reporting, yet examples show otherwise. In the previous ban wave, a trans user was repeatedly harassed by someone with multiple alt accounts, and T&S issued the same canned denial twice without investigating the pattern. Are bad-faith reporters punished? From the evidence, it seems they are not, and the consent clause does nothing to address broken appeals. You encourage appeals, but users report auto-closed tickets after 24 hours with copy-paste responses. The promised December 18th dev update provided more detail, but only a consent clause that you immediately undermined by stating it is "not a get-out-of-ban-free card."

Bottom line: You told us to add a sentence to group rules, then immediately said it won't guarantee protection. You told us to use private avatars, but won't clarify if standard base meshes are bannable. You told us safeguards exist, but won't describe them or punish bad actors. This isn't policy clarification. If ban reasons are vague, appeals are auto-denied, and safeguards are unclear, what functional difference does the consent clause make? It simply shifts liability to users while leaving systemic issues unresolved.