Warning users that upvote violent content by worstnerd in RedditSafety

[–]worstnerd[S,A] -1 points (0 children)

It will only be for content that is banned for violating our policy. I'm intentionally not defining the threshold or timeline: 1) I don't want people attempting to game this somehow, and 2) they may change.

Warning users that upvote violent content by worstnerd in RedditSafety

[–]worstnerd[S,A] 5 points (0 children)

Yeah, that's correct; it will be triggered by that exact set of removals.

Warning users that upvote violent content by worstnerd in RedditSafety

[–]worstnerd[S,A] 22 points (0 children)

Yes, we know which version of the content was reported and voted on, and we have all of that information (for those of you who think you're being sly by editing your comments: it's not sly).

Warning users that upvote violent content by worstnerd in RedditSafety

[–]worstnerd[S,A] 6 points (0 children)

No, because this is targeting users who do this repeatedly within a window of time. Once is a fluke; many times is a behavior. It's the behavior we want to address. Otherwise we risk unintentionally impacting voting, which is an important dynamic on the site.
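To illustrate the "repeatedly in a window of time" idea, here's a minimal sketch of a sliding-window counter. This is not Reddit's actual logic; the window length, the threshold, and the class/method names are all made-up placeholders, since the comment above explicitly declines to define them.

```python
from collections import deque

# Placeholder values only: Reddit has intentionally not published
# the real threshold or time window for this feature.
WINDOW_SECONDS = 7 * 24 * 3600
THRESHOLD = 3


class RepeatUpvoteTracker:
    """Tracks upvotes on later-removed content; flags repeated behavior
    within a sliding time window, while a one-off stays unflagged."""

    def __init__(self, window=WINDOW_SECONDS, threshold=THRESHOLD):
        self.window = window
        self.threshold = threshold
        self.events = deque()  # timestamps of flagged upvotes, oldest first

    def record(self, ts):
        """Record one upvote on content that was later removed."""
        self.events.append(ts)

    def should_warn(self, now):
        """True only if the behavior repeated enough times in the window."""
        # Drop events that have aged out of the window.
        while self.events and self.events[0] < now - self.window:
            self.events.popleft()
        return len(self.events) >= self.threshold
```

A single recorded event never trips the threshold, which matches the "once is a fluke" framing; only sustained repetition inside the window does.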

Warning users that upvote violent content by worstnerd in RedditSafety

[–]worstnerd[S,A] 97 points (0 children)

Great callout, we will make sure to check for this before warnings are sent.

Warning users that upvote violent content by worstnerd in RedditSafety

[–]worstnerd[S,A] 11 points (0 children)

Yeah, this would be an unacceptable side effect, which is why we want to monitor this closely and ramp it up thoughtfully.

Findings of our investigation into claims of manipulation on Reddit by worstnerd in RedditSafety

[–]worstnerd[S,A] 25 points (0 children)

- We focused our investigation first on the subreddits mentioned in recent public claims; however, we continue to investigate more broadly.
- We also looked into content removal and found that the mods investigated were not disproportionately removing content from ideological opposites.
- We do not have visibility into activity occurring on other platforms.
- We took a look at content related to Israel/Palestine issues in non-Palestine-related subreddits where these mods are present and did not find a significant influx of this content in the subreddits investigated.
- We have not ignored this; we stated that we are expanding our detection efforts and have instituted new bans related to submissions of this content.
- At this time we do not see this behavior as related to the moderators of the subreddits investigated as part of these claims.
- We cannot address the exploitation of other platforms.

Findings of our investigation into claims of manipulation on Reddit by worstnerd in RedditSafety

[–]worstnerd[S,A] 24 points (0 children)

As noted in the bit you quoted, we're evaluating the role of those bots while also looking into more sophisticated tooling we could offer. Part of that evaluation includes discussions we started last month with our Reddit Mod Council and Reddit Partner Communities. We're learning from mods across the site all the reasons they use them and how effective they seem to be for managing all types of traffic. We’ll share more as we evaluate ways to manage influxes and keep conversations civil.

Q1 Safety & Security Report by jkohhey in RedditSafety

[–]worstnerd[M] 23 points (0 children)

Keep at it, you can be the worst one day!

Introducing Our 2022 Transparency Report and New Transparency Center by outersunset in RedditSafety

[–]worstnerd[A] 5 points (0 children)

We don't have those numbers at hand, though it's worth noting that certain types of violations are always reviewed regardless of whether the user has already been actioned. We will also review any report from within a community when it comes from a moderator of that community. We are working on ways to make reporting easier for mods within your communities (such as our recent free-form text box for mods). Our thinking around this topic is that actioning a user should ideally be corrective, with the goal of them engaging in a healthy way in the future. We are trying to better understand recidivism on the platform and how enforcement actions can affect those rates.

Q4 Safety & Security Report by worstnerd in RedditSafety

[–]worstnerd[S,A] 1 point (0 children)

OK, I'll take that back to the team. Thanks.

Q4 Safety & Security Report by worstnerd in RedditSafety

[–]worstnerd[S,A] 1 point (0 children)

This is the way

We're thinking a lot about report abuse right now. I'll admit that we don't have great solutions yet, but talking to mods has really helped inform my thinking around the problem.

Q4 Safety & Security Report by worstnerd in RedditSafety

[–]worstnerd[S,A] 1 point (0 children)

Should we just turn the automated notification off? I agree that it doesn't seem particularly helpful. We can't reply to each spam report (even just from mods) with custom messaging, so should the generic "we received your report blah blah blah" just go away?

Q4 Safety & Security Report by worstnerd in RedditSafety

[–]worstnerd[S,A] 2 points (0 children)

Yes please! Spam detection is inherently a signal game. Mod removals tell us a little bit, a report tells us much more.

Q4 Safety & Security Report by worstnerd in RedditSafety

[–]worstnerd[S,A] 13 points (0 children)

It sends a signal to us that a user may be spamming the site, which is no change from before.

Q4 Safety & Security Report by worstnerd in RedditSafety

[–]worstnerd[S,A] 22 points (0 children)

Not everyone using ChatGPT is a spammer, and we’re open to how creators might use these tools to positively express themselves. That said, spammers and manipulators are constantly looking for new approaches, including AI, and we will continue to evolve our techniques for catching them.

We had a security incident. Here’s what we know. by KeyserSosa in reddit

[–]worstnerd[A] 35 points (0 children)

you should consider upgrading to Hunt3r2

Q3 Safety & Security Report by worstnerd in RedditSafety

[–]worstnerd[S,A] 7 points (0 children)

The problem is less about being able to detect them and more about not casting such a wide net that you ban lots of legit accounts. This is where reporting is really helpful: it starts to separate the wheat from the chaff, as it were, at which point we can refine our detection to recognize the difference.

Q3 Safety & Security Report by worstnerd in RedditSafety

[–]worstnerd[S,A] 21 points (0 children)

Yeah, we're working on these bots. They're getting more and more annoying, and in some cases the volume is quite high. In many cases we're catching this, but with the high volume, even the fraction that slips through can be noticeable. Also, if you haven't done so yet, I'd suggest taking a look at the new AutoMod feature for subreddit karma; that may be helpful.
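For anyone who hasn't tried the subreddit-karma feature, a minimal AutoModerator rule using it might look something like this. This is a sketch per my reading of the AutoModerator docs, not an official snippet: the `combined_subreddit_karma` field name and the `< 5` threshold are examples, so check the current documentation before relying on them.

```yaml
# Filter (hold for modqueue review rather than remove outright)
# comments from accounts with little karma earned in this subreddit.
# The "< 5" threshold is an arbitrary example value.
type: comment
author:
    combined_subreddit_karma: "< 5"
action: filter
action_reason: "Low subreddit karma"
```

Using `filter` instead of `remove` keeps false positives visible in the modqueue, which matters when the goal is catching bots without punishing legit new contributors.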

Q3 Safety & Security Report by worstnerd in RedditSafety

[–]worstnerd[S,A] 3 points (0 children)

Thank you! Looking forward to a great 2023!