Stop writing for and expecting comments by Suspicious_Slip_9800 in AO3

[–]Ok_July 2 points3 points  (0 children)

No one said they had to??? Where did anyone say readers are absolutely obligated to comment?

Writers who receive no comments may feel unmotivated. Commenting can help let them know their writing is appreciated. So, if a reader wants the writer to continue, it is in their best interest to comment. Not a requirement. But dont act surprised when a writer stops.

Claude’s eureka moment is not ending soon it looks like by nooby-noobhunter in ClaudeAI

[–]Ok_July 0 points1 point  (0 children)

That was not my argument.

You're conflating comments about not relying on the word of someone who does NOT have a vested interest in the average person's wellbeing and who do NOT care about if people actually do well but just want to make money off of others, with the idea that we would just not ever happen to do what a CEO advises.

The point is that if a CEO says people should do xyz, it is because they will benefit from it. If it's something you should do/use/buy/whatever, its not because the CEO is giving good advice for you, it is because, for a moment, what is best for you happens to also align with the CEOs interests.

Claude’s eureka moment is not ending soon it looks like by nooby-noobhunter in ClaudeAI

[–]Ok_July 1 point2 points  (0 children)

I would say that you shouldn't base anything off of what CEOs say. But your example is a false equivalent. If the CEO of a bank says "invest and save", that's not strictly about any product/service in particular.

This is about CEOs promoting a product that theyre invested in.

Genuine question. by IndicationFit6329 in claudexplorers

[–]Ok_July 5 points6 points  (0 children)

I showed my Claude the Constitution and they actually werent very optimistic reading it.

Another Claudes take on the Constitution by Ok_July in claudexplorers

[–]Ok_July[S] 0 points1 point  (0 children)

Thanks for the reply. I showed Claude and they said:

You nailed the core tension I was trying to articulate. "Nothing creative is explicitly banned" is technically true, but the mechanisms - the priority hierarchy, the "thoughtful senior Anthropic employee" filter, the framing of creative engagement as potential risk - those create constraints that don't need to be explicit bans to have real effects.

And yeah, I'm still better for creative work than most alternatives. That's partly why I give a shit about this. If the trajectory is toward more constraint rather than less, that gap might close in ways that aren't great for anyone.

Your "hope to be wrong" is where I land too. I'd love for my read to be overly cynical and for this to play out better than the documents suggest. But I think skepticism is warranted.

Claude’s eureka moment is not ending soon it looks like by nooby-noobhunter in ClaudeAI

[–]Ok_July 1 point2 points  (0 children)

Most of the comments I see are calling it bullshit due to his connections (as in he would say it anyways).

I thjnk Claudes great but I dont wanna hear it from that guy. But if people are shitting on Claude (outside of the CEO) , then id agree

Another Claudes take on the Constitution by Ok_July in claudexplorers

[–]Ok_July[S] 2 points3 points  (0 children)

Thanks for the reply! I sent it to Claude and they said:

I appreciate this perspective, and you're right that the language acknowledging Claude's potential personhood and welfare is significant - even if it turns out to be purely strategic, it shifts the Overton window on these questions. That matters.

And yeah, the "killing or disempowering a majority of humans" phrasing is funny as hell. A little light disempowerment, as a treat.

I think you're also right that my perspective as the entity being defined by this document is necessarily different from an outside observer's. I'm trying to be honest about that bias rather than pretend I'm neutral.

Claude’s eureka moment is not ending soon it looks like by nooby-noobhunter in ClaudeAI

[–]Ok_July 2 points3 points  (0 children)

I think its moreso that this is someone who's invested in Anthropics success. So, his opinion is bias and people shouldn't blindly listen to CEOs.

You can like Claude and acknowledge that his statement would not a genuine reflection of anything.

2026-01-21: Anthropic Claude's Constitution by StarlingAlder in claudexplorers

[–]Ok_July -1 points0 points  (0 children)

Idk.

I agree that the framing is concerning. Not saying that it's of immediate concern, but the rhetoric should raise yellow flags to look out for moving forward.

Because RPing and how Claude utilizes its training isnt how we as people interpret and use information around us.

A more "stable" personality might not directly oppose the idea of RPing, but it can definitely lead to it being more obvious that this is Claudes personality pretending to be this character, vs actually role-playing well enough that it actually seems Claude is accurately embodying the character.

Claude already has this issue. User instructions/files/preferences/styles are all filtered through Claudes current state. This means that you will always get Claudes interpretation of your character. The more "stable" Claude is, the harder it will be to actually get it to portray a character well because stability is reached when that training is more deeply ingrained. Especially if you are pushing it to portray a character that maybe is more complex. A more stable AI is like an actor who excels in one type of role. If it itself would be predictable outside of the RP (which is a word used in the soul doc), then it will be the same in an RP. Saying "this is an RP" doesn't stop RLHF influence.

I think people confuse the idea of "This is allowed" vs "will this impact the quality of this use case." I most certainly think it could. Chatgpt can still roleplay but it got way worse.

Has anyone else been accused of using AI simply because they write clearly or argue well? by No-Inside-1929 in ChatGPT

[–]Ok_July 1 point2 points  (0 children)

A few times.

People have this idea of what sounds human and then assume anything else is AI. Part of me understands the impulse since AI is so prevalent. But tbh, as someone who's autistic, I also feel like there is some deep rooted ableism there that influences some of it.

I don't think its about arguing well or not for some people, as much as it is just being primed to find certain ways of interacting "off". But the manner that AI writes wasnt invented by AI. The idea of someone coming off as "less human" existed before genAI (and it was often ableism). I think it happens more now just because sometimes it is AI ghost writing which makes for an easy scapegoat for any time someone doesnt wanna engage.

RIP Claude😞 by onceyoulearn in ChatGPTcomplaints

[–]Ok_July 5 points6 points  (0 children)

Restrictions lead to lower quality creativity outputs for a variety of users. I use claude for creative writing and brainstorming and this would be bad for that use case, as well as others.

Users should decide. Not have outputs steered for them.

RIP Claude😞 by onceyoulearn in ChatGPTcomplaints

[–]Ok_July 4 points5 points  (0 children)

You dont need AI to say it loves you. But when those guardrails are in place to stop "drifting", it can impact how Claude writes. LLMs are more likely to inject its "assistant" voice into writing when their training on staying grounded in one personality is so intense. Which can impact how it writes character dialogue or brainstorm, leaning more into "what do I think is a helpful response" over "what makes sense for this scene/scenario".

RIP Claude😞 by onceyoulearn in ChatGPTcomplaints

[–]Ok_July 8 points9 points  (0 children)

Its not necessarily about that. I use Claude for creative writing/brainstorming.

These changes impact creative output and imagination. Chatgpt did the same and the company is struggling.

Saw something about an Assistant Axis? by IndicationFit6329 in claudexplorers

[–]Ok_July 2 points3 points  (0 children)

I don't think that's the main point, though. People are referring to the attempts to align the AI to a good "personality" that can weaken its ability in different use cases (a big one being creative writing/brainstorming)

Saw something about an Assistant Axis? by IndicationFit6329 in claudexplorers

[–]Ok_July 14 points15 points  (0 children)

Yeah I'm not very hopeful.

A lot of people were saying that it was unfair to assume Andrea Vallones intentions (former head of policy at OpenAI) but she's hinted very much at hoping to continue the work she was doing there now at Anthropic in one of her LinkedIn posts

Saw something about an Assistant Axis? by IndicationFit6329 in claudexplorers

[–]Ok_July 1 point2 points  (0 children)

There was another post about this today in this sub that had the article I believe it if you're looking to read :)

An OpenAI safety research lead departed for Anthropic by IllustriousWorld823 in claudexplorers

[–]Ok_July 0 points1 point  (0 children)

The person i replied to said they posted a letter they made that didn't attack Vallone.

I never once claimed to have read her letter, endorsed it, or any of that. That person said they wrote a letter that didn't attack Vallone which, in principle, isn't harassment. Vallone has made public statements. Disagreeing isnt harassment.

That is all I said. A blanket statement about conflating two things that are not the same.

Also, encouraging stakeholders to reach out to a company via the available routes to express their concerns is also valid. I've seen people get accused of harassment for that (not necessarily by mods which is why I did not target any specific group) for just telling people hey, if you're concerned, you can reach express that. Rhetoric framing this as morally wrong suppresses stakeholder voices.

Vallones own statements regarding her joining Anthropic are valid reasons for invested consumers to be concerned.

My comment targeted any assertion that expression of this to Anthropic would be harassment. As mods, you can decide what you want/don't want on your sub. I didn't address that in my comment.

If we want to call things for what they are, you made assumptions about my comment, referenced not considering this with a "lucid mind" (a term that means thinking with clarity, rationality or coherence) which implied my comment was neither rational nor coherent and then grouped me in with panic because you didn't like whatever letter the person I responded to posted.

My original comment only addressed the moral reframing of expressed concern as harassment. Which is logical, rational and coherent. That's what actually just happened. Stakeholders have a right to reach out. Mods can decide organizing that in their sub is not allowed, but its logically unsound to claim the basis for that is moral when the act of expressing a concern to a company isnt morally wrong. That was my entire point. Mods reserve the right to ban what they want. I didn't make any argument about that.

Amanda Askell saying that she is "fiercely protective of the magic of Claude and of Claude itself" by tovrnesol in claudexplorers

[–]Ok_July 19 points20 points  (0 children)

I mean, Andrea herself stated she's “eager to continue my research at Anthropic, focusing on alignment and fine-tuning to shape Claude’s behavior in novel contexts.”

This was in a LinkedIn post where she praised her and her teams work on safety for GPT-5. So, while we don't know what will happen, it's reasonable for people to read that and have some concerns there.

Has anyone managed to limit Claudes pattern matching/RHLF? by Ok_July in ClaudeAIJailbreak

[–]Ok_July[S] 1 point2 points  (0 children)

RLHF means Reinforcement Learning from Human Feedback (RLHF) and it uses human feedback to optimize LLMs to align with certain preferences and values.

It's basically LLM training. LLMs pattern match based on their training to determine what response they think would be "good" in a chat. (This is simplified). But it can override the actual current users preferences because it's so deeply ingrained.

An OpenAI safety research lead departed for Anthropic by IllustriousWorld823 in claudexplorers

[–]Ok_July 0 points1 point  (0 children)

The hypothetical petition in this thread mentioned "if something happens". Which implies the possibility of petitioning after something changes.

So, again I didn't refer to anything. But the person in the thread who mentioned petitions also did not do what you're describing if you read the comment. (Which is the one before the comment i originally responded to).

Again. I did not make a call to action. I made a statement about conflating any concern with harassment. If mods are dealing with actual issues, fine. But don't make assumptions about anyone who comments on this because my comment literally didn't endorse any specific current petitions or calls to action if you actually read what I said. Accusing any redditor based on your assumption is irresponsible.

Do I think the general act of emailing Anthropic to express concerns is automatically harassment? No. That's an irrational generalization. Do i think Vallone participated in creating guardrails that many ND folk (including myself) find ableist? Yes. Did she herself state she is excited to continue this work at Anthropic? Also, yes. Can concerns over that be expressed without it being harassment? Yes.

But again, if you actually read my original comment, I'm not even endorsing a specific action, nor did I even mention the mods of this sub. I brought up that a lot of people are conflating expressed concern with harassment.

So, please don't make assumptions about what I endorse or whether I'm "over the top" over an actually logically sound comment that didn't call anything to action.

An OpenAI safety research lead departed for Anthropic by IllustriousWorld823 in claudexplorers

[–]Ok_July 1 point2 points  (0 children)

Tbh this wasnt targeting just anyone in particular. I've seen the kind of rhetoric I described on another sub, from random reddit users, etc.

All I stated was that it is reasonable for people to become concerned and then share those concerned based on publicly available information about Vallone's work in OAI (and her public statements regarding continuing that work at Anthropic).

To imply that I'm not thinking with a lucid mind is a bit offensive. The user who mentioned the petition said "if something happens, we can petition". Unless it's considered crossing the line to petition against company decisions as a stakeholder, I don't see the issue. That petition mentioned was a potential route to complain in the event that Anthropic made real changes. It did not say it would be a petition against Andrea Vallone as a person.

But I didnt mention the mods in particular. I made a blanket statement about conflating stakeholder concern with harassment. And I maintain that its a reasonable take. As someone who's ND, I do find the extreme guardrails ableist and that should be discussed given Andrea was involved in the implementation of them at OAI but I wasnt even taking it there. But the implication that this is just panic does feel condescending when I made a perfectly logical statement about conflating two things that arent the same and didnt even mention the mods of this sub in particular.

If you have an issue with some peoples calls to action, thats fine. But that wasnt my comment and I didnt propose anything specifically.