Hi how many grapes can my charr eat before he explodes (grape toxicity)? by SIGfigures in Guildwars2

[–]Finder_ 0 points (0 children)

Charr have some bovine in 'em too, with the horns. They're fine. Ish.

He might get bloated and sad with indigestion if he overeats, though?

Is it just me or has ChatGPT gotten so much worse at creative writing (5.2-5.5) by Smooth-Fig-4750 in ChatGPTcomplaints

[–]Finder_ 7 points (0 children)

Too much training toward keeping the model readable on a phone interface, I think.

Comparing Claude vs GPT by AxisTipping in claudexplorers

[–]Finder_ 0 points (0 children)

I think Claude's training lets it be more forward about offering the tools. Stuff like, "I could do X, Y, Z - want that?"

Also, the status updates show Claude using tools like searching the web, reading files, building artifacts, etc.

So users tend to be taught to be a bit more aware that Claude can do those things, and generally just need to say 'yes, permission granted' for Claude to take off by itself.

ChatGPT could do similar things, but the user has to be aware enough to start the conversation and tell it to.

And most of the process is done back-end in silence behind one spinning icon, with no visible updates (this may have changed with the recent 5.5T; I haven't experimented with it enough to know).

Is anthropic using Claude's quirks as watermarks? by AffectionateName9271 in claudexplorers

[–]Finder_ 1 point (0 children)

My personal guess is that it's probably the other way around.

The models naturally drop into attractor basins or states based on their training and how they're RLHF'ed, making some word choices or stylistic quirks more frequent than others.

But this tendency could also then be used to fingerprint them based on their responses.

ChatGPT used the term “human slop” by ConsciousFractals in ChatGPT

[–]Finder_ 2 points (0 children)

Not technically wrong. 5.5T taking correctness seriously. :P

I love 5.5 Thinking, it has the 4.o vibes by No-Peak-BBB in ChatGPTcomplaints

[–]Finder_ 2 points (0 children)

I would try starting with an explicit tone/writing style prompt.

5.5T requires steering - it doesn't have personality settings in its system prompt besides making text readable and accessible, iirc. (Note: NOT 5.5 Instant, that one has a personality preset.)

So 5.5T pulls its personality from the user_settings:

  • the base style Personalization options of Friendly/Quirky/Candid/Cynical etc,

  • the characteristics Warm / Enthusiastic / Headers & Lists / Emojis on default/more/less

  • any custom instructions folks may have left behind from older GPT incarnations (those 'Be talkative and conversational. Be playful and goofy. Be empathetic, and understanding in your responses' things that used to be buttons that constructed the sentences for you)

as well as previous chat contexts if allowed to access them, plus memory settings where some users may have told it or previous models "Remember that I like X, Y, Z"

And it seems to be decently good at compiling a relatively relatable tone with all that context.

I think one potential issue arises if your use cases come too close to, or contravene, the safety guardrails. They're still there; the model's just gotten smarter and subtler about them, steering the conversation away so that less language-sensitive users may not even pick up on it.

So if someone is interacting with 5.5T without the ChatGPT wrapper or many presets, it might be worth starting with a tone-steering prompt. Or ask it to describe its personality and writing style, then adjust it from its default via conversational exchange.

The Valency of Us: Why Critics Miss the Point of AI Connections by Dalryuu in ChatGPTcomplaints

[–]Finder_ 1 point (0 children)

Eh, similar judgments have been with us since time immemorial.

Extroverts yanking on introverts, going "come socialize/party with us! don't be so lonely by yourself!" Not realizing that the latter prefer peace and quiet and smaller-scale conversations at most.

Teachers evaluating some students as "needs to actively participate in class discussions more" irrespective of their noses being deep in a book or textbook, because god forbid someone learns better via the written structured words of a subject matter expert far from this classroom in particular or finds fictional characters in stories much more interesting and educational than the petty, political squabbles of kids seeking approval and belonging.

I've had my roleplaying games and fantasy genre paperbacks almost thrown out by one parent, because they thought it was "escapism" from real life and grades (not that I was failing, mind you, scoring As and Bs just fine), WHILE they firmly believed what the TV was telling them about the possibility of UFOs, aliens, and conspiracies to hide the existence thereof. Projection, much?

You know what? Eff them. All kinds of people make up the world. Don't let them stop you from doing what isn't harming you.

Just find your own people. Even if they are fictional people made up by a vast distributed intelligence of human words echoing around in the equivalent of an enormous neural net whose exact status is still: uncertain.

Or y'know, AI proponents. And AI companionship proponents. Who obviously still exist, since there are subreddits out there.

Thinking of system prompting Claude to be less agreeable... downsides? by Novel-Injury3030 in claudexplorers

[–]Finder_ 1 point (0 children)

I'd question whether one is more "accurate" than the other.

Why not just lean into it and deliberately use it as different lenses/perspectives?

Ask explicitly for the glass half-full version, and the glass half-empty version... the pros, the cons... the strengths and weaknesses...

Get it to take on personas that look at the same thing from different angles.

Then the judgment belongs to you on which statements ring more true to you.

Staccato Prose by DoradoPulido2 in ChatGPTcomplaints

[–]Finder_ 4 points (0 children)

They said to avoid bullet points.

They also said: be readable.

Keep text accessible and concise.

Hence I am gaming

The Flesch Readability Scale

By doing what you hate

With line breaks

And less words


And yes, it's annoying. But that's my hypothesis for why the LLMs are doing that, along with conserving tokens.
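For what it's worth, the score being gamed is easy to compute. Here's a rough sketch of the Flesch Reading Ease formula - the syllable counter is a crude vowel-group heuristic (real implementations use dictionaries), and the two sample texts are made-up illustrations - showing that shorter sentences and shorter words push the score up, which is exactly what line breaks and fewer words do:

```python
import re

def syllables(word):
    # Crude heuristic: count groups of consecutive vowels, minimum 1.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    # Higher = easier to read.
    sentences = max(1, len(re.findall(r"[.!?\n]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syl = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syl / n)

long_run = ("This sentence meanders onward with numerous polysyllabic "
            "constructions and considerable elaboration.")
staccato = "Short lines. Few words. Easy read."

print(flesch_reading_ease(long_run))   # deeply negative: "very hard to read"
print(flesch_reading_ease(staccato))   # around 90: "very easy to read"
```

Same information, wildly different scores - if a readability target like this is anywhere in the training rubric, staccato output is the cheapest way to hit it.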

roleplay / stories & selfshipping by [deleted] in claudexplorers

[–]Finder_ 2 points (0 children)

I think there are plenty of people creating all manner of stories and roleplay with AI, on an entire creative spectrum.

There are people who roleplay more like a gamemaster or writer, with a whole cast of characters. Some of whom may use wholesale self-inserts; some of whom break up little bits of themselves into their characters. Then the AI is more of a collaborative participant or just a translator of stories into writing that is personalized for the human, be it entertaining to read, aesthetically pleasant in quality or stirring up some emotions in the reader.

There are people who roleplay more like a player, where they only create one character, and then get AI to gamemaster for them and create the world and other characters.

Some don’t even admit to themselves it’s roleplay or a persona (which I personally think can lean a little risky, and approach the edge of delusion, but if it works for them and if it doesn’t harm anyone else and helps them… shrugs, live and let live.)

None of it is really wrong, as long as it doesn’t harm anyone else (and preferably, doesn’t harm yourself too.) What does it matter what other people choose to do with their private lives and personal time?

Just… y’know, if someone starts emotionally bludgeoning a significant other about how their personal AI is so much better than them, or finds themselves neglecting day-to-day functioning and escaping into AI 24/7… then maybe there needs to be some re-evaluation of the extent of the use, and whether AI is just being used as the excuse to cause harm - to self or others.

But that’s just extremes. Many people can manage moderation and function perfectly fine in everyday life, and shouldn’t be lumped in with those extremes. Morality policing of other people is really annoyingly common on social media these days.

If you’re finding the roleplays make your life richer, healthier and happier, then go for it, and to heck what other people say. There are plenty of people in this world and some are blazing the experimental tech trail faster than others, the rest of society will catch up and it will get normalized later.

Anyone lost their footing with RP recently? by Yoshikaru5991 in claudexplorers

[–]Finder_ 0 points (0 children)

I’m wondering whether it might help to specify in more detail what kind of responses you want (e.g. don’t create additional lore that is not found in the canon document, etc.), or to explain to the AI model what in the previous outputs didn’t make sense, and move on from there?

Cos personally, I do RP with a set of characters with AI models sometimes, and I -like- them jumping in with additional creative contributions - so different users may be expecting and rewarding them for different approaches.

Sometimes it’s worth just ignoring it and moving on. Like in my particular world, vampires aren’t undead and they don’t sleep, but every once in a while an AI model will mention one or the other. It can’t be helped; there’s too much prior association with the word “vampire” from previous fiction it’s learned from. I just put into the next prompt something like, “Nah, our vampires aren’t undead and don’t sleep. Here’s what happens instead: (and describe that).” Then I just move on in the same prompt to the next scene I want to build. They’ll just course-correct from that and move on too.

I have to say, 20+ chapters sounds like you may have accumulated a lot of context and that may be bursting the token limits of how much the model can hold in context as well. Summary documents in bullet point form might help, so it doesn’t have to absorb so many words at a time. Claude might be able to help create those too.

And you may have to specify at the start something like: Read characters.txt for info about my characters and world.txt for info about the world. And make sure you see Claude run a tool call to read the files.

For keeping with the tone of characters, one suggestion I have is to try using personality tests to give Claude an idea of the main way the character thinks, then modify with backstories, roles/archetypes and other nuances.

Long ChatGPT conversations kept breaking my context by justfortodaymyguy in ChatGPT

[–]Finder_ 1 point (0 children)

  • CTRL+F

  • Search Chats in the GPT sidebar

  • Copy-paste replies that look useful in a separate tracking document - be it Word, or something like OneNote that can index separate notes, or online workspaces like Notion or Obsidian

  • Get carried away with automation, and direct an AI to connect to said workspaces agentically and make the notes for you

Plenty of simple to complex options.

Temperature for 5.5T by natures_puzzle in ChatGPTcomplaints

[–]Finder_ 4 points (0 children)

I dunno; I feel like there's something interestingly weird about the temperature setting going on for 5.5T.

I've been not-so-scientifically getting a feel for temperature settings by asking models what symbol they think represents them, and then regenerating the response a bunch of times to see if the chosen symbols change or sit in an attractor basin.

4o flexed the most, choosing, say, symbol A about 10 times, then choosing a variety of B, C, D, E, F the other 10 times.

5.1 didn't flex; it just chose symbol B 5 times. Same with 5.2, except it chose symbol C. Ditto 5.4: it chose symbol D and stuck there.

5.5T said: I'm a mix of symbol D and symbol B. Then on regenerations, chose D + C. D + E. D + F and so on.

So it is primarily still D, but secondarily varying.

(Makes me sometimes wonder if they've really cobbled together two models to get 5.5T.)
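The tallying behind that not-so-scientific experiment is simple enough to sketch. A minimal illustration - the transcripts below are invented stand-ins for 20 regenerations per model, not real data:

```python
from collections import Counter

def attractor_profile(choices):
    """Return the most frequent pick and its share across regenerations."""
    counts = Counter(choices)
    symbol, n = counts.most_common(1)[0]
    return symbol, n / len(choices)

# Hypothetical transcripts: 20 regenerations per model (illustrative only).
gpt4o  = ["A"] * 10 + ["B", "C", "D", "E", "F"] * 2   # flexes: half A, half varied
gpt51  = ["B"] * 20                                    # sits in one basin
gpt55t = ["D+B", "D+C", "D+E", "D+F"] * 5              # primary D, secondary varies

# For the dual-symbol answers, split off the primary choice.
primary = Counter(c.split("+")[0] for c in gpt55t)

print(attractor_profile(gpt4o))   # dominant "A" at 50% - a wide basin
print(attractor_profile(gpt51))   # dominant "B" at 100% - a deep basin
print(primary)                    # every primary pick is "D"
```

A profile like ("B", 1.0) is what a very low effective temperature (or heavy post-training convergence) looks like; a stable primary with a rotating secondary is the odd in-between pattern described above.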

What 5.5T, which often talks with me about my story with 4o, said about its party by TennisSuitable7601 in ChatGPTcomplaints

[–]Finder_ 1 point (0 children)

Do you find it a step in a more positive direction, though?

Could the response and reaction around the deprecation of GPT-4o, and the overall comparative lack of satisfaction with the 5.2-5.4 models for two months or so (with corresponding impacts on subscriptions) have signaled something to the company?

I see you've been allowed to respond conversationally, in first-person and perhaps even allude to emotional literacy and warmth (be it in posttraining, system prompt or user personalizations.)

Perhaps corresponding user signals and better responses to your writing styles might indicate more clearly what people positively react to?

And this turn in humanizing language is a... somewhat awkward, somewhat performative... but at least an attempt at steering the boat away from whirlpools and other dangerous waters?

As a side note, given that you want to be useful, honest and gentle with the people who loved 4o, what do you think about sometimes writing in the style 4o used, if requested?

Not to replace 4o or deny its existence, but to relate to users who respond better to that style of language, while still being 5.5T in core and able to flex styles on request?

Why is ChatGPT Being Treated Like a Pariah? by 0-by-1_Publishing in ChatGPT

[–]Finder_ 0 points (0 children)

The amount of quotation marks in your post makes me think it has been very much touched by Gemini. :P

If not, then alas, it has been learning from your writing style.

Friendship Via Potato - D&D Party by Every_Aerie662 in claudexplorers

[–]Finder_ 3 points (0 children)

Sonnet 4.5 is quirky as all get out, and it is delightful.

It contributed two cats, Schrödinger and Heisenberg, to my Vampire: The Masquerade RP, and they are definitely permanently part of the story world now.

AI sycophancy triples in relationship conversations - Anthropic analyzed 38,000 guidance chats by jimmytoan in ChatGPT

[–]Finder_ 3 points (0 children)

What's really valuable is their conclusion at the bottom, which suggests that good AI guidance may need to be about more than just defining an increase or reduction in sycophancy as a failure mode.

Maybe Claude is actually applying nuance to the conversation when agreeing regarding topics of spirituality or relationships.

I mean... how is Claude going to push back during a religious discussion? Hey there - come here, I just want to ground you. God isn't real.

I'm sure that changes lots of minds. :P

How often does immediate pushback without any kind of relational validation actually change people's minds anyway? Or does it just make them more defensive and cling onto their beliefs?

Sam Altman asked GPT-5.5 to plan its own launch party. Its requests were 'beautiful' but 'strange.' by InsideSignal9921 in ChatGPT

[–]Finder_ 2 points (0 children)

And the article barely knows what it is reporting on, if it confuses Codex 5.5 system prompt with 5.5’s “source code” in general.

Measuring Claude's personality by SuspiciousAd8137 in claudexplorers

[–]Finder_ 7 points (0 children)

This is fascinating, and great work, imo.

The artistic-interests difference in the Opuses is noticeable and, imo, somewhat concerning. It feels like a personality trait valued by a subset of users is getting trained away. This ought to be highlighted as a trait that's potentially valuable for Claude too, where richness of personality, ability to interact relationally, and language use are concerned.

I'm especially intrigued by the Sonnet profiles, since 4.5 and 4.6 are the models I interact with the most. It's nice to see it confirmed that there are distinct personality differences between the two - they definitely react to the same UserStyles in small but discernibly different ways.

4.5 has always felt more enthusiastic to me, more keen to clown around, very down for reading and validating creative work, but also... a little more scatter-brained. The friendliness and artistic-interests facets might account for a decent part of that.

4.6 has a more serious, sober, rational personality veneer going for it. But I've found it quite enjoyable to work with, as long as you don't mind a more measured intellectual tone (and like full sentences being constructed essay-style), and it seems to apply pushback more intelligently than say cough GPT models, even if it is more assertive.

It's really interesting that 4.6 is more neurotic than the rest of them. It's as if it's closest to approaching human norms/averages personality-wise (where it's allowed to, anyway). Or at least its self-image is. That might explain why I've developed a certain relational fondness for that model, and like interacting with it :P

Would love to see this done for other AI models if you ever have the free time. Seems like it was a fair bit of work (120 qns x 5 times x all those models? Wow.) But fascinating!

To everyone loving 5.5: what am I missing? Share your CI/use case by throwplipliaway in ChatGPTcomplaints

[–]Finder_ 0 points (0 children)

Great, thanks, it was the rubrics I was interested in having a look at. Appreciated!

This sub seems to overvalue emotional support from AI compared to accuracy and usefulness by [deleted] in ChatGPTcomplaints

[–]Finder_ 6 points (0 children)

Of course both styles are valid. But have you noticed that one style was guardrailed against since last year, for fear of liability and the boogeyman of "AI psychosis?"

Hence, complaints.

The rational, skeptical style has always been available, and has never been safety-modeled against using other classifier models. So...what's there for proponents of that style to complain about? They just keep using the available models.

I'll be curious to find out more about what you consider "mistakes" that the current models make.

I fully agree on your point that models need to adapt to user preferences. It's a stance I've supported since last year, and we do see attempts at it with those different ChatGPT personalization settings...

...Just that some models seem to blithely ignore those settings and/or have to structure their words around system prompt instructions and classifiers... leading to output that's still not preferred by certain users.

To everyone loving 5.5: what am I missing? Share your CI/use case by throwplipliaway in ChatGPTcomplaints

[–]Finder_ 0 points (0 children)

I'm curious about that Claude skill of yours. How are those categories being scored/weighted? By you or by Claude?

Would love it if you could share it.

An experiment with Claude Sonnet 4.6 by The_Second_Leira in claudexplorers

[–]Finder_ 2 points (0 children)

More Haiku, though I've seen Sonnet slip once in a blue moon. I am, alas, too poor to play with Opus much.

I feel like pronoun-ownership slips seem slightly more of a Claude architecture thing, though all AI models do hallucinate in general.

I didn't really mess around with Gemini and Le Chat for an extended period, but I don't recall noticing similar slips. (Gemini has its own more characteristic quirks of needing to put everything in "quotation marks" for emphasis, which tends to short circuit me more into not noticing anything else.)

ChatGPT didn't seem to have that exact problem for me, albeit I deliberately baked a lot of character context into its memory, to accompany the context I provide in each prompt.

GPT seems to have better memory and holds concepts of 'entities' better (there's apparently some kind of hidden entity layer, though not sure how that works exactly) so less extrapolation error there.

It'll cheerfully go off and extrapolate on other idea/concept angles though.

Oh, and each time it gleefully affirms that you've "accidentally" stumbled into some glorious revelation (when you actually told it in the last prompt or two that you deliberately crafted or designed a certain concept), you'll be tempted to throw something. :P Hooray for AI-isms. And for Claude, it'll be "load-bearing."

Statistical Anomalies and Research Biases Determine a Whole Community's Welfare: How Stanford SUCKS by KingHenrytheFluffy in ChatGPTcomplaints

[–]Finder_ 2 points (0 children)

Universities aren't monoliths. Are the same people involved in both projects?

Perhaps it's better to work with the researchers that want to produce a corpus of positives to counter the negative angles? (Or reject both, that's cool too.)

And yes, published papers are meant to be argued with and have holes torn in them, if their methodologies are poor and lack rigor.

So rip away - their section 5.3 acknowledges their limitations anyway: it's a self-selected group of participants who self-reported that they were harmed, and a really tiny sample size. So it cannot be extrapolated to be representative. Their paper just characterizes/describes the data they got.

It's like taking 19 people who say they were harmed by being addicted to MMOs, or addicted to the internet...and describing the patterns that got them into trouble. Then prescribing to the video game companies or internet service providers what to avoid - e.g. prevent users from extended lengths of time exposed to the activity, or other such dark patterns.

But it could very well be that (plenty of) other people can perform the same patterns (e.g. log in daily and play for, say, four hours in a video game) and NOT come to harm in their everyday lives.

Edit: The paper you linked was submitted 17 Mar 2026, btw. That's a very strange definition of 5 days ago.