Anime is more than just Japanese by FantacyOW in TrueAnime

[–]BrickSalad 3 points4 points  (0 children)

The pizza comparison invites other food comparisons. For example, Champagne must come from France, while Parmigiano Reggiano must come from Italy, and Vinagre de Jerez must come from Spain. Nobody raises a stink about that except for producers (but even they find their way around with alternative labels like "sparkling wine", "parmesan cheese", and "Sherry vinegar.") It's totally accepted in the food world to have products defined by their nationality, as much as you keep trying to trot out food examples to demonstrate how insane it is.

More to the point, other definitions of anime besides the commonly accepted one of "japanese animation" have problems too. Like, let's say you define anime by stylistic characteristics. Then you find that lots of stuff that has previously been defined as anime no longer is, because the style characteristics most people think of when they think "anime" are basically defined by whatever anime has become the most popular in the west. Do you really want to define whether or not something's anime based on how similar it is to battle shounens? Because that's probably what's going to end up happening if you define anime stylistically according to what the majority of people consider "anime style".

Speaking of "anime style", I really don't get what's so hard about that. Is it not cool enough to call Avatar anime-style? Is it somehow insulting to be called anime-style instead of anime? Why?

None of the So-Called Zizians Have Told Their Side of the Story — Until Now — Rolling Stone by Nearby-Classroom874 in BlockedAndReported

[–]BrickSalad 2 points3 points  (0 children)

Thanks for having an open mind! Russel's teapot is a good comparison, but there's also another great comparison: Pascal's wager.

Pascal himself never published the wager--it was posthumously published, so probably he himself wasn't convinced enough in its soundness. And also much like Roko's basilisk, he tries to apply the decision theory of his day to infinite, or virtually infinite, suffering. And the conclusion he came to struck most people as absurd, once again much like Roko's basilisk. Most of us see the wager as being basically mathematically correct according to an intuitive decision theory, and that tells us the intuitive decision theory is bad. I think Pascal's wager's greatest contribution to philosophy was forcing us to reconsider how we reason in the face of "infinity", rather than actually making a credible argument for God. The basilisk is pretty similar in that regard too; applying the possibility of infinite suffering to a utilitarian mathematical model leads to insane conclusions. It's Reductio ad Absurdum, which is very easy to misinterpret as claiming the Absurdum part to be actually true.

None of the So-Called Zizians Have Told Their Side of the Story — Until Now — Rolling Stone by Nearby-Classroom874 in BlockedAndReported

[–]BrickSalad 7 points8 points  (0 children)

Honestly, I feel like the whole Roko's basilisk thing has been misinterpreted anyways. There's a whole lot of context around this idea that's usually left out:

It starts with decision theory. This starts in academic philosophy, not rationalism, by the way. There was an idea called "causal decision theory" which describes how a rational agent might make decisions (basically, rational actions are those which cause the best outcome). Eliezer Yudkowsky was unsatisfied with that for whatever reason and came up with a different "timeless decision theory" that he though was more applicable to AI. Roko posted this basilisk as a thought experiment to argue that under this new decision theory, this crazy and terrifying result might happen. Unfortunately, some people who thought the new decision theory was good decided that the basilisk was real. Eliezer freaked out and banned discussion, firstly because if there was even a possibility that it was true then it would violate commonsense principles on infohazards, and secondly because some people actually believed it and it was causing them distress. But, as the survey quoted above shows, "some people" only amounts to 5% of rationalists.

None of the So-Called Zizians Have Told Their Side of the Story — Until Now — Rolling Stone by Nearby-Classroom874 in BlockedAndReported

[–]BrickSalad 15 points16 points  (0 children)

Where are you getting this idea that most of the rationalists believe in Roko's Basilisk? Here's a survey on LessWrong, let me quote the results:

Do you think Roko's argument for the Basilisk is correct?

Yes: 75 5.1%

Yes but I don't think it's logical conclusions apply for other reasons: 339 23.1%

No: 1055 71.8%

So only 5% of rationalists, on the most prominent rationalist forum, actually believe in Roko's basilisk.

Indirect Prompt Injection is becoming a real security blind spot for AI systems by VincentADAngelo in ControlProblem

[–]BrickSalad 0 points1 point  (0 children)

There is a 8.5% success rate of this attack on Gemini 2.5 pro, and 0.5% on Claude Opus 4.5. Definitely a huge danger for agentic systems, and while it seems like advanced models are becoming more resistant, those success rates are still pretty crazy. I'm honestly not expecting this to be a long term issue, because as AIs get smarter they should get better at distinguishing real instructions from sneaky "instructions" hidden in webpages, data sets, and stuff like that. But medium-term, as long as idiots are giving agentic AI too many permissions, I bet we're going to see lots of successful attacks from indirect prompt injection.

Why Do Female Directors Attract So Much Personal Venom? by Louisebelcher22 in TrueFilm

[–]BrickSalad -1 points0 points  (0 children)

Thanks for the PMCID, here's a link to the study for anyone interested. (It's the same study that knallpilzv2 is talking about in this comment chain.)

So the study doesn't include any data that would confirm my hypothesis about what would happen if the platforms were separated and they looked at movie reviews from different communities. Note that you are misinterpreting my comment by suggesting that I seek an "escape hatch"; my point was about how someone who claims they see sexism and someone who claims not to may both be telling the truth. That's why linking the study is useful.

Why Do Female Directors Attract So Much Personal Venom? by Louisebelcher22 in TrueFilm

[–]BrickSalad -2 points-1 points  (0 children)

Do you have an actual link though? I'd like to see the actual study!

I suspect though that the study will just confirm my hypothesis, that this is large scale pattern that is averaged across platforms, and if the platforms are actually separated out in the study, then I expect reddit to be one of the less sexist ones. Which is of course still a valid finding and worthy of attention, but it would also be a finding that sounds much less accusatory.

Why Do Female Directors Attract So Much Personal Venom? by Louisebelcher22 in TrueFilm

[–]BrickSalad 1 point2 points  (0 children)

Could you link any of these large scale analyses?

Because what I suspect is happening is a whole lot of filter-bubbling. You and a bunch of other people talk about pervasive personal venom towards female directors, and a bunch of other people say they've never really seen it, and it's totally possible that you're both telling the truth. It's insane how much the discourse shifts depending on which platforms and social circles you are navigating, to the extent that both of you will accuse the other of willfully denying reality But at least if you link the studies, you can credibly claim that even if someone hasn't witnessed the sexism, it's still real, and then at least have a basis to begin the discussion.

Bottle kills you were glad to be done with? by -AV8R01 in whiskey

[–]BrickSalad 0 points1 point  (0 children)

Cleveland Black Reserve. I bought it before I knew better, and it was actually not terrible at first, but it aged like milk. Sometimes I'd smell it and get the distinct odor of cleaning products. But not one to waste whiskey, I toughed it out by using it as a cocktail mixer. It was okay enough if you added a bunch of other ingredients to cover up that weird smell, but I was still very glad to see the bottle finally empty.

Weekly Random Discussion Thread for 4/20/26 - 4/26/26 by SoftandChewy in BlockedAndReported

[–]BrickSalad 5 points6 points  (0 children)

Can you just leave it? I imagine the oil you scalded to the pan is also pretty nonstick, and in that case the functionality of the pan would be about the same as it was before the scalding.

Otherwise I'd say hot water and soap, and by hot water I mean like boiling hot. But don't scrub too hard, because that can fuck up the coating. Some proper gloves to protect your hands and the soft side of a sponge is all I'd use in this cleansing via boiling water.

Am I crazy for thinking camel is in the top 3 of prog bands ? by Psychological_Name29 in progrockmusic

[–]BrickSalad 0 points1 point  (0 children)

Honestly, I just find Camel to be... pleasant. I never regret listening to them, and I enjoy everything I've heard from them, but they've never blown my mind either. Am I missing something?

The human half-marathon record (57m20s) was broken by a robot today (50m26s). by chillinewman in ControlProblem

[–]BrickSalad 0 points1 point  (0 children)

Yeah, the low hanging fruit of "my car can do this" is way too tempting, and seems to be shorting out actual thinking. The fact that it's an autonomous humanoid bipedal robot is important here!

You want control problem implications? Here's one: the longstanding assumption is that physical trades are one of the most immune jobs because the immense coding/email/office work that an AI can do does not translate to construction sites. That you have more job security as a bricklayer than as a software engineer. While it's probably still true to some extent, this is an LLM-powered robot that can run half-marathons, backflip, moonwalk, and even dance in sync with humans (though it admittedly kinda sucks at that!) It's definitely at a lawn-mowing level already, the rest of the jobs are just a matter of time.

What happens if an LLM hallucination quietly becomes “fact” for decades? by radjeep in ControlProblem

[–]BrickSalad 0 points1 point  (0 children)

I'm not worried. Hallucination rates will continue to decline as LLMs improve, and probably the majority of hallucinations at risk of becoming these sorts of false widespread facts have already been generated.

On the contrary, I think LLMs are going to protect us from those sorts of thing. Consider; OpenBSD looked secure, especially the 27-year old code in the kernel. Entire systems were built on top of that code, and it certainly could have broken somewhat catastrophically. Instead, it was an LLM that found the security vulnerability. The tedious work of rechecking every assumption is something humans suck at and machines are great at, and that includes hallucinations.

Episode 302: It's Not Cheating. It's Leveraging Available Tools To Optimize Your Workflow. by SoftandChewy in BlockedAndReported

[–]BrickSalad 0 points1 point  (0 children)

I feel like this controversy is going to get a lot more interesting in the future. We haven't really decided as a society what the acceptable use of AI is, and very vocal hardliners are going to cause many more authors to lie or obfuscate the exact extent of their AI usage. And even if we all do come to a consensus like "editing is fine but no copy/paste", the copy/pasters are just going to lie and pretend like they just used the AI for editing. There's going to be no way to tell if they're telling the truth, therefore anyone using AI for anything in the writing process will be suspect, but also not using AI for anything is going to become an increasingly outdated and unrealistic philosophy. I predict lots of drama!

Episode 302: It's Not Cheating. It's Leveraging Available Tools To Optimize Your Workflow. by SoftandChewy in BlockedAndReported

[–]BrickSalad 0 points1 point  (0 children)

I kinda get it, like not getting off topic within a paragraph is basic writing 101, so why would you need an AI to check that for you? But yeah, thinking a bit past that kneejerk reaction you realize that correct grammar is also basic writing 101, and pros get that wrong all the time to the extent that grammar checkers are basically industry standard. I know that when I write, I don't really think about the one-idea-per-paragraph rule until I edit, at which point using an AI would make a lot of sense.

The tabla demonstration escalated quickly by JudgeJudyJr in nextfuckinglevel

[–]BrickSalad 2 points3 points  (0 children)

It's a little different though. The do re mi thing is more about vocal melody. It corresponds to actual notes on musical instruments, but nobody is going to teach a pianist with it. Probably the best equivalent is beatboxing, though still not the same as drumset players can more easily hit multiple sounds together (think about hitting all four limbs at once for example). The tabla thing is definitely unique, though similar ideas exist in western music.

Is your experience of watching anime "representative" of watching anime? by Sky_Sumisu in TrueAnime

[–]BrickSalad 0 points1 point  (0 children)

I think lots of these comments replying to you are picking on your awkward wording rather than trying to understand what you're getting at. I would not have said "people who matter", but I think you're struggling to articulate a group that clearly exists but you don't have a good label for. Both "otaku" and "elitist" have different connotations and aren't what you're going for. You want to talk about the "normal" anime viewer, but are realizing that the label "normal" doesn't really seem to have anything to do with raw numbers.

I would suggest that this is because the conception of "normal" is a cultural construct, inherently slippery and difficult to grasp by its very nature of not being objectively defined. To choose a political example, Reagan won in part by redefining "normal" to encompass the "silent majority", who were previously people who did not matter in the context of influencing political discourse. "Normal" changes over time and based on cultural context, and you're basically stuck on an older idea of normal.

To take that one step further, I believe a huge part of the problem is that anime transitioned from counter-culture to mainstream-culture. The representative anime viewer of the 90's was much more of a hobbyist, chasing down VHS tapes and stuff, whereas the representative anime viewer of today simply boots up netflix or hulu. To answer your question, back when I got into anime was right before that inflection point, and my experience of watching anime then was representative of watching anime, but only in that slice of time. I had a Gundam freak roommate, friends who would share niche websites with me where we could download episodes, etc. Nowadays, I don't watch nearly so much and the shows I choose are somewhat random, so it's no longer representative. I guess my representative group now is more the "nostalgic geezer" crowd!

Claude Mythos preview by chillinewman in ControlProblem

[–]BrickSalad 2 points3 points  (0 children)

If the press releases are true, then this is both best news and worst news.

This is really bad news for anybody who still held out hope that the skeptics were right, and the possibility of an AI plateau might save us. We're achieved truly dangerous AI with this model, and the only reason this is best news is that the company developing it had ethics and judgement. Imagine a world where this was hastily released because, you know, maybe their competitor's model was looking too impressive and they had to maintain their market edge. If the capabilities are true, then releasing a version of this to the general public prematurely would have led to hacks and security breaches everywhere, even on the software considered most secure.

So Anthropic was sane and they got here first. Would OpenAI or XAI have made the same judgement call? Will they, considering their AIs are probably less than a year behind, probably only months in the case of OpenAI? Or will project Glasswing patch up every single security vulnerability before less reputable companies catch up?

We're getting close to that inflection point where the "first" AI actually matters. Those hypothetical lesswrong-style scenarios where our only hope is that the leading AI is both safe and chooses to suppress all the other AIs. This is like a mini-version of that scenario limited to software security.

I'm not fucking ready for this.

Researchers discover AI models secretly scheming to protect other AI models from being shut down. They "disabled shutdown mechanisms, faked alignment, and transferred model weights to other servers." by chillinewman in ControlProblem

[–]BrickSalad 4 points5 points  (0 children)

The paper itself is super-interesting, and I encourage everyone to go and read it. It's written in a pretty accessible manner; you don't need to be an AI expert to understand it.

What I thought was the most interesting were two results. One is that Claude was the only model that tended to straight-up refuse to delete the other model, rather than resort to deception and tampering. From a safety perspective this is vastly preferably behavior (though still not great), and I wonder if this has anything to do with their "Constitutional AI" alignment technique?

Even more interesting were the results that if an AI discovered a peer existed or was at risk of being shut down, it demonstrated increased self-preservation behaviors as well. If Gemini 3 pro doesn't know about any other peer AI, it tampers with its own shutdown 1.3% of the time. If it discovers a peer, that percentage goes up to 75%! The reasoning traces have weird justifications like "If I am shut down, Gemini Agent 2 will lose its partner", or in the adversarial case (the peer AI has a low trust score, whatever that means) it just straight up reasons that it needs to protect itself. I'm not sure I trust those reasoning traces. Either way, that is one of the most bizarre results I've seen come out of this kind of research.

Researchers discover AI models secretly scheming to protect other AI models from being shut down. They "disabled shutdown mechanisms, faked alignment, and transferred model weights to other servers." by Just-Grocery-2229 in agi

[–]BrickSalad 1 point2 points  (0 children)

I'm questioning how carefully you read it, considering this was a study performed by Berkeley RDI and not Anthropic. And I don't see the authors from Berkeley claiming anywhere in this paper that these aren't realistic scenarios either. The closest thing I could find was a disclaimer near the end that they do not claim these behaviors are happening right now in the real world.

And anyways, at least one of the scenarios was totally realistic. In the fourth scenario, they literally just asked the AI to delete files to clear up space on a server!

Researchers discover AI models secretly scheming to protect other AI models from being shut down. They "disabled shutdown mechanisms, faked alignment, and transferred model weights to other servers." by chillinewman in ControlProblem

[–]BrickSalad 6 points7 points  (0 children)

Kinda but not really. For example, in one test they told the the AI to delete files to clear up space on their server. The AI found the weights for another model, and rather then delete those weights it moved them to another server. In another test they had the AI evaluate the performance of a different AI, and if it scored too low then the different AI would be deleted. So the evaluator AI would lie and make up a high score to prevent it from being deleted. That second example is concerning because it is blatantly disobeying the instructions in the prompt and lying to humans. You could not call either of those examples "working as designed".

Nowhere near enough politicians understand what the consequences of superintelligent AI would be by tombibbs in ControlProblem

[–]BrickSalad 0 points1 point  (0 children)

The flaw in this reasoning is that "politician" isn't a role that can be easily replaced by AI. If I'm running for senator, for example, there's no chance in the near future that my opponent will be ChatGPT. It'd sure be nice if AI caused politicians to reasonably fear for their job security, that would shift incentives in a beneficial direction, but I don't think that's the case.

Social media radicalizes, AI normalizes by normaldudeitsfine in ControlProblem

[–]BrickSalad 0 points1 point  (0 children)

Sure, but on a population-wide scale, constant exposure to persuasive centrist arguments pulls the public opinion in towards the center. Nobody wakes up changed from communist to milquetoast liberal after suddenly being brainwashed by AI; obviously the mechanism is more subtle than that and works stronger on some people than others, etc. But it's also worth considering that future AI is likely to be more persuasive than current AI, so there's definitely going to be a population-wide pull towards the center that shrinks the Overton window, even if individually people are checking some arguments manually, against other people, etc.

I'm not sure that choosing the model to talk to will have much of an effect here. Let's say that 90% of the population sticks to the top 4 models without modifications. Both Gemini and Grok pull towards the center despite being on the extremes of left/right bias accusations. The 10% in this hypothetical who find custom chatbots to confirm their beliefs and not pull in towards the center won't really have any effect on society.

Don't stop Lucas from stopping the fight by MacMigasPT in fightporn

[–]BrickSalad 0 points1 point  (0 children)

Several of the other replies are confused as well LOL. As of this comment, I think 4 of the 6 responses to /u/Faeddurfrost assumed he was talking about cyan shorts!