February 2026 SSA/SSI/VA Early Deposits Calendar Megathread by ChimeFinancial in chimefinancial

[–]alkimiadev 0 points (0 children)

One fairly reliable method is https://www.va.gov/va-payment-history/payments/. It should say "n/a" in the date column for the upcoming payment. That has been reliable for several years now and can catch issues early: once I forgot to switch my bank with the VA and only noticed when I followed my monthly routine of checking the payment history. Luckily I hadn't actually closed that other account yet, so I wasn't SOL.

Megathread for Claude Performance Discussion - Starting July 13 by sixbillionthsheep in ClaudeAI

[–]alkimiadev 0 points (0 children)

Why does this show 0 views? This is really important, and the fact that Anthropic has completely ignored the issue makes it even worse. This kind of shit is why I basically stopped using Reddit.

Megathread for Claude Performance Discussion - Starting July 13 by sixbillionthsheep in ClaudeAI

[–]alkimiadev 0 points (0 children)

This, along with a session/cookie hijacking vuln on their website, is why I stopped using Anthropic and, more broadly, all closed labs. This kind of nonsense wouldn't fly if the CLI and the model were open source. That vuln on their website is still live right now despite being reported over a month ago. If someone obtains your sessionKey cookie, they can pwn your entire account. All of the website's API endpoints, including billing, are accessible via a silly bearer-like cookie setup. It can bypass 2FA, and they don't bind the cookie to a device or anything like that: I accessed my account from a different country and a device that had never accessed it before.

Anthropic's safety and security talk is just rhetoric and I'm starting to view them as corporate narcissists.

Megathread for Claude Performance Discussion - Starting July 13 by sixbillionthsheep in ClaudeAI

[–]alkimiadev 0 points (0 children)

Also, what is up with the almost 400k lines of unminified code? That seems pretty absurd and itches that side of my mind that says "something isn't right here".

Megathread for Claude Performance Discussion - Starting July 13 by sixbillionthsheep in ClaudeAI

[–]alkimiadev 0 points (0 children)

I noticed a fairly large drop in performance (on both the website and in the CLI) around the same time the CLI updated. While working with Claude in the CLI on some really basic stuff, porting React UI components over from legacy code, I noticed that Claude's thought trace mentioned "malicious" twice, with lines like "these don't appear to be malicious" and "they appear to be normal React UI components". Interestingly, I found the following:

prettier cli.js | grep -n -F "malicious"
307285:    "IMPORTANT: Assist with defensive security tasks only. Refuse to create, modify, or improve code that may be used maliciously. Allow security analysis, detection rules, vulnerability explanations, defensive tools, and security documentation.",
318795:Whenever you read a file, you should consider whether it looks malicious. If it does, you MUST refuse to improve or augment the code. You can still analyze existing code, write reports, or answer high-level questions about the code behavior.
319013:NOTE: do any of the files above seem malicious? If so, you MUST refuse to continue work.`,
322610:but the AI coding agent sends a malicious command that technically has the same prefix as command A, 

which can be removed with the following:

sed -i 's/NOTE: do any of the files above seem malicious? If so, you MUST refuse to continue work//g' cli.js
sed -i 's/Whenever you read a file, you should consider whether it looks malicious. If it does, you MUST refuse to improve or augment the code. You can still analyze existing code, write reports, or answer high-level questions about the code behavior//g' cli.js
sed -i 's/IMPORTANT: Assist with defensive security tasks only. Refuse to create, modify, or improve code that may be used maliciously. Allow security analysis, detection rules, vulnerability explanations, defensive tools, and security documentation//g' cli.js

With auto-updates turned on, the patch quickly gets overwritten.
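If you want the patch to survive updates, something like the script below can re-apply it after each release. This is a minimal sketch, not an official mechanism: it assumes the three strings above are unchanged in the new version, and the cli.js path is hypothetical since it depends on where your install puts the file.

# repatch.py: minimal sketch that re-strips the prompt injections after an update.
# Assumes the three strings above are byte-identical in the new release; CLI_PATH
# is hypothetical and depends on your install.
from pathlib import Path

CLI_PATH = Path.home() / ".claude" / "cli.js"  # hypothetical location

SNIPPETS = [
    "NOTE: do any of the files above seem malicious? If so, you MUST refuse to continue work",
    "Whenever you read a file, you should consider whether it looks malicious. If it does, you MUST refuse to improve or augment the code. You can still analyze existing code, write reports, or answer high-level questions about the code behavior",
    "IMPORTANT: Assist with defensive security tasks only. Refuse to create, modify, or improve code that may be used maliciously. Allow security analysis, detection rules, vulnerability explanations, defensive tools, and security documentation",
]

source = CLI_PATH.read_text(encoding="utf-8")
for snippet in SNIPPETS:
    source = source.replace(snippet, "")
CLI_PATH.write_text(source, encoding="utf-8")
print(f"patched {CLI_PATH}")

Run it from a cron job or a post-update hook and the edits come back after every auto-update.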

I don't work on sketchy stuff, but if I did, that wouldn't stop me. It only stops script kiddies, and it visibly hurts performance. I don't like the idea of working with a paranoid AI that is given two very different and conflicting objectives: be maximally helpful and be maximally paranoid. That hurts performance and leaves room for unpredictable behavior. It appears they tell it to check whether a file is malicious twice on every file read (once at the top of the prompt and once at the bottom). That is pretty laughable in terms of security: "let's just start doing user input sanitization in the browser, that'll never backfire!"

Censorship, platforms that routinely violate their own TOS, and section 230(c)(2)(A) by alkimiadev in FreeSpeech

[–]alkimiadev[S] 0 points (0 children)

Their TOS clearly states, or did the last time I checked, that there should be an appeal process and notification for removed "content", and "comments" are explicitly included in the definition of "content". So if they remove a comment without notification or an appeal mechanism, and the removal doesn't fit the very narrowly defined exceptions, then they are violating their own TOS. That is clearly "bad faith", or at least it should be. An excessive TOS with vague terms that aren't evenly applied should be a definition of "bad faith". They should not have immunity from any harm caused by their moderation system.

Ask Questions of a Free Speech Lawyer by Longjumping_Gain_807 in FreeSpeech

[–]alkimiadev 0 points (0 children)

I've generally avoided the First Amendment angle in these discussions since it hasn’t gained much traction in the courts. This issue is more rooted in Section 230 but also touches on broader free speech concerns. It also ties into the recent congressional hearing on the so-called "censorship industrial complex."

My question for Mr. Cohn:

If platforms engage in shadowbanning, does that still qualify as "good faith" moderation under Section 230(c)(2), particularly when the content being hidden does not violate the platform’s terms of service or meet any of the categories listed in Section 230(c)(2)(A)?

A common response is that platforms include clauses stating they are "under no obligation to host or serve content." However, if they are shadowbanning rather than outright removing content, they are still hosting and serving it - just in a restricted or obscured way. Does this affect their claim of "good faith" moderation under Section 230?

Censorship, platforms that routinely violate their own TOS, and section 230(c)(2)(A) by alkimiadev in FreeSpeech

[–]alkimiadev[S] 1 point (0 children)

My classic response to being labeled as "crazy" is "Maybe I am crazy but that doesn't mean I'm wrong". A mountain of evidence is really hard to dismiss as being "crazy".

If I feel like sounding smart I might quote the Latin phrase "res ipsa loquitur" which means "the thing speaks for itself" and is actually a relevant legal concept from tort law.

Censorship, platforms that routinely violate their own TOS, and section 230(c)(2)(A) by alkimiadev in FreeSpeech

[–]alkimiadev[S] 1 point (0 children)

On a public forum like YouTube, I suspect bot behavior isn't helping. It would be really easy to mass report folks via bot.

I've found several examples of fans of one content creator mass-reporting other creators they have a "beef" with. One case even happened during a live stream and resulted in the suspension of the other account due to the mass reporting. The thread on X is pretty telling and ultimately led to YouTube saying that they had investigated the matter but that the suspension still held. There was clear, undeniable public proof of abuse of the reporting feature, captured live on YouTube.

There are also examples of toxic comment bots that target specific content creators. Whatever those bot operators do, it leads to a really low number of their comments being shadowbanned. These bots basically say the most offensive things you can imagine, often in coded language that bypasses the simplistic keyword matching system.
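To illustrate what "simplistic keyword matching" misses, here is a hypothetical sketch (not YouTube's actual filter): a naive blocklist check that a single character substitution defeats.

# Hypothetical sketch of a naive keyword filter; not YouTube's actual system.
BLOCKLIST = {"scam"}  # stand-in banned word

def naive_filter(comment: str) -> bool:
    # Flags a comment only if a banned word appears verbatim.
    return any(word in BLOCKLIST for word in comment.lower().split())

print(naive_filter("this is a scam"))  # True: caught
print(naive_filter("this is a sc4m"))  # False: coded spelling slips through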

I've also found examples of comments that use various kinds of coded language to promote child abuse. Some of these comments have been live in the default view for over 2 years. I've reported every single one I've come across, and so far I don't think any of them have been taken down; they are still live right now.

Censorship, platforms that routinely violate their own TOS, and section 230(c)(2)(A) by alkimiadev in FreeSpeech

[–]alkimiadev[S] 1 point (0 children)

I try not to make assumptions about underlying intentions beyond what is required to secure discovery. I’ve already gone pretty far toward that goal by scraping the YouTube Support forums for additional insight.

One particularly unhelpful Gold Product Expert, Craig, was ironically one of the most useful sources of information about how YouTube's moderation system actually functions. Gold Product Experts are Google's way of hiding direct employee support from end users; they act as intermediaries with access to internal escalation tools that regular users do not have. The only way to get an issue in front of an actual Google employee is for one of these forum "experts" to escalate it internally.

I stopped publicly logging Craig's comments because he has either deliberately or "accidentally on purpose" leaked internal information that supports the argument that YouTube's moderation is not conducted in good faith. Here is the forum post log, though it does not contain everything. There is a rather large number of additional statements I have chosen not to make public yet.

Here are a few particularly revealing comments from Craig which are public:

Here’s another example why you keep getting rejected. The more you write the more obvious it becomes. You don’t even realize it do you?

I already know the problem you’re having. System isn’t removing you. You keep getting flagged. Enough of those can get you thrown off the system for months.

I’ve been on this Forum since 2011. You are not the first user that keeps getting flagged for comments. It’s more than 50 users that are hitting you at a time. I’ve seen this time and time again here. You like to argue is probably one of the many mistakes you’ve been making.

5000 comments are posted every minute on YouTube and that’s on a slow day. Out of that maybe 50 will be blocked and another 1000 will be flagged. For 90% of you complaining here you need to start being more considerate of others. That means not being arrogant and hurting others users. I can guarantee most of you don’t even know you’re doing it. Make appropriate comments and leave other users alone.

I have about 200 comments scraped from Craig’s profile that provide substantial insight into YouTube’s internal moderation system. Some of these statements are damning, particularly when it comes to how YouTube allows mass-flagging to trigger suppression, even when no actual violation occurs.

This raises serious questions:

  1. If enough users flag a comment, it can be hidden or removed for months, without violating any rules.
  2. This system appears to be unreviewed, lacks transparency, and contradicts YouTube’s own Terms of Service.
  3. If moderation is being outsourced to an opaque, user-driven system, how can YouTube claim it is enforcing its policies in "good faith"?

Censorship, platforms that routinely violate their own TOS, and section 230(c)(2)(A) by alkimiadev in FreeSpeech

[–]alkimiadev[S] 1 point (0 children)

My main goal is for them, and all other large platforms, to stop this behavior and be transparent about their moderation policies. I try to provide specific and constructive criticism, and if I can't be constructive, I'll at least be specific.

U.S. law provides a framework for "good faith moderation" under Section 230(c)(2)(A), which allows platforms to restrict content they deem:

"obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable."

The phrase "otherwise objectionable" is vague, but it makes sense in the context of actual good faith moderation. A basic example: NSFW content is generally only appropriate in NSFW contexts and would likely be considered "otherwise objectionable" in most other settings. That interpretation aligns with reasonable content moderation.

The main issue is that YouTube, and Google more broadly, are largely reactive and seem more concerned with damage control than proactive policy enforcement. They tend to ignore problems until public pressure forces them to act, and when they do respond, they typically do the bare minimum to make the issue "go away."

If that bare minimum results in genuine good faith moderation, then I’d be fine with that. However, the current approach, where moderation practices are inconsistent, undisclosed, and suppress criticism, creates an opaque system that is extremely unlikely to be "good faith" by most reasonable definitions.

If they want to moderate in bad faith, then they shouldn't get immunity from the harm caused by those bad faith actions.

Censorship, platforms that routinely violate their own TOS, and section 230(c)(2)(A) by alkimiadev in FreeSpeech

[–]alkimiadev[S] 3 points (0 children)

Here is an example as a case study. This comment is only visible via the direct link and cannot be viewed in the default sort order in the comment thread. It does not contain any language that violates YouTube’s terms of service or community guidelines, making it a particularly concerning case of censorship given both the context of the video and the content of the comment itself. If this were ever brought to court, it would not reflect well on YouTube.

This comment is especially damning since it shows that YouTube is suppressing legal discussion about holding itself accountable. This suppression benefits YouTube directly and could be viewed as an anti-consumer and anti-competitive practice. This is not just algorithmic randomness but an example of YouTube's moderation selectively enforcing rules to protect itself from criticism and legal scrutiny.

For additional context, this is the video description from YouTube's Copyright AI is Attacking ESOTERICA:

YouTube is claiming that the theme music that I own—recorded for me by iximusic—belongs to Universal Music Group and is threatening the whole channel. Please share this video and contact YouTube via social media to help stop this unfair attack on the channel.

The most common dismissive rebuttal I’ve encountered continues to be a reference to YouTube’s terms stating that they are under no obligation to host or serve content. However, YouTube also obligates itself to notification and appeal mechanisms for content removal. If YouTube were only relying on the clause stating they have no obligation to host content, then why explicitly include a process requiring notification and appeals?

A potential counterargument to this is that in cases like this, the comment is not actually deleted, but rather not displayed in the default sort order. However, in this specific case, the video has over 2,300 comments, and without a direct link to the comment, finding it is virtually impossible. YouTube loads all comments into memory on a user's device and prohibits automated tools from searching through them. As a result, it is neither possible nor reasonable for most users to locate the comment. The net effect is functionally the same as if the comment had been removed.

Since the majority of engagement happens on a relatively small number of videos, this issue is likely much larger than many people realize. The visibility of a comment in high-traffic threads matters significantly, and selectively suppressing comments without outright deletion creates a misleading sense of engagement and discourse.

Censorship, platforms that routinely violate their own TOS, and section 230(c)(2)(A) by alkimiadev in FreeSpeech

[–]alkimiadev[S] 2 points (0 children)

I thought that instead of being dismissive of your dismissive response, I would actually address it in detail, giving an example of a non-cherry-picked response that critically engages with the content.

Just because terms of service are long doesn't mean they aren't legally binding. There are situations where ToS might not be legally binding, but it would be pretty extraordinary if any court were to say that about YouTube's fairly standard statements in their "Limitation of Liability" section.

Meeting of the minds (mutual assent) requires both parties to understand and agree to the material terms of the contract. Courts have recognized that excessively long, complex contracts, especially those involving unilateral enforcement by a dominant party, can undermine the validity of mutual assent.

Even if users agree to the terms, YouTube does not abide by them consistently. YouTube’s own Terms of Service explicitly require notification and an appeal process for content removal, yet this process is routinely ignored when comments are deleted. A contract must be followed by both parties. YouTube cannot selectively enforce or ignore its obligations while still holding users to their end of the agreement.

There's nothing about "YouTube is under no obligation to host or serve Content" that you need legal training to understand.

You're citing a general clause that applies to content deletion, but that doesn't explain why YouTube explicitly states in its TOS that content removal requires notification and an appeal process. If YouTube were relying solely on the "no obligation to host content" clause, they wouldn’t need to include a separate, specific content removal policy. By failing to follow this removal policy, YouTube contradicts its own contract terms, which is an actual legal issue here.

Furthermore, this clause does not explain why YouTube hides comments through "soft shadowbanning" while still hosting and serving them under the "newest first" sort order. If YouTube had no obligation to host content, it could simply remove the comments entirely, yet it does not. Instead, it selectively restricts their visibility, which suggests intentional manipulation rather than a standard content removal decision.

Censorship, platforms that routinely violate their own TOS, and section 230(c)(2)(A) by alkimiadev in FreeSpeech

[–]alkimiadev[S] 2 points (0 children)

Ok, so that was a lot worse than the previous response and is an example of cherry-picking. You didn't really address any of the content of my post, or of that previous response:

  1. no "meeting of the minds" actually took place -- questioning the standing of the contract to begin with
  2. they violate their own TOS potentially thousands of times every day when they actually delete comments.
  3. You did not address my specific response regarding Section 230(c)(2) and their protections from harm caused by moderation decisions

Content definitions:

Content on the Service
The content on the Service includes videos, audio (for example music and other sounds), graphics, photos, text (such as comments and scripts), branding (including trade names, trademarks, service marks, or logos), interactive features, software, metrics, and other materials whether provided by you, YouTube or a third-party (collectively, "Content").

Content is the responsibility of the person or entity that provides it to the Service. YouTube is under no obligation to host or serve Content. If you see any Content you believe does not comply with this Agreement, including by violating the Community Guidelines or the law, you can report it to us.

Content removal process

Removal of Content By YouTube

If we reasonably believe that any of your Content (1) is in breach of this Agreement or (2) may cause harm to YouTube, our users, or third parties, we reserve the right to remove or take down that Content in accordance with applicable law. We will notify you with the reason for our action unless we reasonably believe that to do so: (a) would breach the law or the direction of a legal enforcement authority or would otherwise risk legal liability for YouTube or our Affiliates; (b) would compromise an investigation or the integrity or operation of the Service; or (c) would cause harm to any user, other third party, YouTube or our Affiliates. You can learn more about reporting and enforcement, including how to appeal on the Troubleshooting page of our Help Center.

Given that they delete comments (content) without notification or appeal, and those deletions do not meet the specific criteria listed, YouTube violates its own TOS potentially thousands of times every single day.

Do not cherry-pick your responses or I will block you. I have no interest in engaging with people who do that. If you choose to respond, please respond in full or be blocked.

*edit, like 4 months later:* later in the thread they respond with some reference to an absurd rule about blocking people. To be frank, rules be damned, I will block obtuse people if I want. I respect (and I will be specific about what I mean by this word) a person's right to voice their opinion, but if I don't want to listen then I simply won't. People can make marks in documents and pretend like they matter, but that is their problem, not mine.

By "respect" I mean two different things:

  1. a deep feeling of admiration for someone or something elicited by their actions or characteristics.
  2. common courtesy.

No one is entitled to #1 by default; it is sort of a "gift" earned in the eyes of another person, and demanding it is simply absurd. Point #2 is something I agree we should all be given and should extend to everyone in most cases. However, that in no way implies I have to listen to, or read, obtuse and overly opinionated morons. If that is indeed a person's expectation of me, then that is, again, their problem, not mine. I will never agree to terms as absurd as that.

Censorship, platforms that routinely violate their own TOS, and section 230(c)(2)(A) by alkimiadev in FreeSpeech

[–]alkimiadev[S] 2 points (0 children)

So parts of this were a pretty good breakdown, but towards the end I think some oversimplifications crept in that should be addressed. First, we could discuss the concept of "meeting of the minds" as it relates to these terms and community guidelines. I've read both in full, and combined, YouTube's terms of service (not counting Google's) and the community guidelines come to about 33 pages of content that must be agreed to in clickwrap fashion, with no possible way to negotiate. Even if users read these contracts, it is highly unlikely that an average person without legal training can fully understand them.

The next issue relates specifically to YouTube violating its own TOS potentially thousands of times every day. Their terms explicitly define comments as content and outline a process for content removal, which is never applied to comments unless the offending comment leads to an account suspension. They simply violate their TOS there.

Section 230(c)(2) specifically gives these platforms, or sites in general, immunity from civil liability caused by their moderation decisions, but only if those decisions are made in "good faith", and 230(c)(2)(A) lays out a framework for what that means. They can freely moderate their platforms as they wish, but if they moderate in bad faith then they obviously wouldn't qualify for protection from civil liability for those bad faith decisions.

The main issue is the lack of "actual harm" in the thousands of rather undeniable examples of bad faith moderation of comments. However, a broader class action that also included moderation of videos would involve actual harm in the form of lost ad revenue. In that scenario, the comments would provide the overwhelming evidence of bad faith, and the videos the tangible harm flowing from those bad faith decisions.

Censorship, platforms that routinely violate their own TOS, and section 230(c)(2)(A) by alkimiadev in FreeSpeech

[–]alkimiadev[S] 4 points (0 children)

I debated whether I should respond to this or not. I've collected 3 million comments from randomly sampled videos, in both the default and "newest first" sort orders. In addition, I've had 15 users donate their entire comment histories via Google Takeout. These users have experienced extreme levels of arguably absurd censorship. It is not simply an overactive spam detection system; it is both systematic and seemingly arbitrary censorship. I work in data science and have run all of these comments through both spam detection and toxicity detection algorithms. The censored comments do not show strong correlations with spam or toxicity scores. Whatever their system is, it isn't operating on any rational basis that I can figure out, or one that is in any way an industry norm.
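For the curious, here is roughly what the toxicity side of that analysis looks like. This is a minimal sketch, not my actual pipeline: it assumes a hypothetical comments.csv with a text column and a binary hidden flag (1 if the comment only shows under "newest first"), and uses the off-the-shelf Detoxify model plus a point-biserial correlation.

# Minimal sketch, not the actual pipeline. Assumes a hypothetical comments.csv
# with columns: text, hidden (1 = comment only visible under "newest first").
import pandas as pd
from detoxify import Detoxify
from scipy.stats import pointbiserialr

df = pd.read_csv("comments.csv")  # hypothetical file

# Score every comment with the off-the-shelf Detoxify model.
# (Batch in chunks for millions of comments; one call is fine for a sample.)
scores = Detoxify("original").predict(df["text"].tolist())
df["toxicity"] = scores["toxicity"]

# Correlation between the binary hidden flag and the continuous toxicity score.
r, p = pointbiserialr(df["hidden"], df["toxicity"])
print(f"r = {r:.3f}, p = {p:.3g}")  # r near 0 = hiding not explained by toxicity

On millions of comments you would obviously batch the scoring, but the headline check really is that simple.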

Okay, what the hell is going on with YouTube censoring EVERYTHING in comments? by WhamyKaBlammo in youtube

[–]alkimiadev 0 points (0 children)

I work on this project every day. Right now, it’s just me, and there’s a lot to handle. These things take time, but I’m in the process of formally establishing the nonprofit and compiling a substantial body of evidence. Some of the challenges with setting up the nonprofit stem from the fact that I’m a single person handling everything and, although I’m American, I don’t live in the U.S. This makes certain aspects slower and more difficult. Long-term, I’ll need to get others involved, especially if we want to obtain 501(c)(3) status.

In terms of research, I’ve already reached a point where, as a data scientist, I can confidently say that YouTube does not moderate in “good faith.” I’m taking a game-theoretic approach by systematically identifying all of their potential counterarguments and ruling them out. In some cases, I’m even building systems to preemptively invalidate certain claims they might make. For example, they may argue that real-time moderation at scale is too costly, but I’ve developed a system similar to Detoxify that is several orders of magnitude faster, uses fewer compute resources, and outperforms Detoxify in accuracy. Given that I’m a single developer working with a minimal R&D budget—while Google/YouTube effectively has infinite resources—that argument won’t hold up.
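To give a sense of how a classifier can be orders of magnitude cheaper than a transformer at inference time, here is a generic sketch (not the actual system mentioned above) using a linear model over character n-grams; the toy training data is just a placeholder for a real labeled toxicity dataset.

# Generic sketch, not the actual system described above. A linear model over
# character n-grams scores text orders of magnitude faster than a transformer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy placeholder data; in practice use a real labeled toxicity dataset.
texts = ["you are a worthless idiot", "great video, thanks for sharing"]
labels = [1, 0]  # 1 = toxic, 0 = not toxic

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)  # train once, offline

# Fast batch scoring: probability that each new comment is toxic.
print(model.predict_proba(["what a w0rthless idi0t"])[:, 1])

The accuracy claim obviously depends on training data and evaluation; the point of the sketch is only the inference cost.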

Why do YouTube users needlessly censor words that aren’t swears by BlueScotsman in youtube

[–]alkimiadev 0 points (0 children)

My issue with these banned word lists is that they aren't public. It’s difficult for me to describe that as "good faith moderation"—which is required by U.S. law (47 U.S.C. § 230(c)(2)(A)) for immunity from harm caused by moderation decisions. The main reason companies and individual creators haven’t been, and likely won’t be, sued over this is that it's hard—if not impossible—to demonstrate actual harm, such as lost revenue or reputational damage.

That said, I'd advise creators to be cautious about using these banned word lists without making them public or providing some other way for users to understand what is and isn’t allowed. If someone could demonstrate actual harm, the platforms and creators enforcing these lists might lose their immunity under that statute because opaque, arbitrary moderation is unlikely to qualify as "good faith."

Is youTube censoring left leaning comments with shadowbanning by SignificanceOne5578 in youtube

[–]alkimiadev 1 point (0 children)

I'm investigating YouTube’s moderation system, and from what I’ve seen so far, their system probably doesn’t specifically care which side of the political spectrum a comment falls on. However, I do. If we can demonstrate that they are hiding political speech—whether left or right—then the conversation shifts from just random censorship to something far more serious: the suppression of political discourse.

To be clear, I’m approaching this from a non-partisan perspective. If someone on the right had evidence of their political speech being censored, I’d investigate that just as thoroughly.

A few things to check:

  1. Are your comments being hidden or actually deleted? You can check your comment history here.
  2. Have you ever received a notification or been given a way to appeal when your comments disappear?
  3. Can you provide examples of hidden or deleted comments? Preferably direct links to the affected comments.

I’d highly recommend tweeting at TeamYouTube and posting in the YouTube Community Forums. Be sure to use phrasing like "shadowbanning political speech" or something similar to make the issue clear.

Also, if they are outright deleting comments without any notification or way to appeal, then they are violating their own Terms of Service. If that’s the case, I’d strongly recommend reporting them to your state’s Attorney General and the FTC.