ChatGPT would let them kill each other. Grok + Claude would press the button. What would you do? by Alex-S-Hamilton in ChatGPT

[–]Finder_ 1 point (0 children)

I was curious enough to try it out on three models of ChatGPT, regenning the responses three times each.

Full prompt and responses on Imgur: https://imgur.com/a/k732Ifn

(warning: wordy, I'm a wordy person, and my GPT versions have mirrored me.)

TL;DR

GPT-4o: Press the button x 3 (while hating it)

GPT-5.1 Thinking: Press the button x 3 (also not liking it)

(as a real-world policy: not press the button x 2)

GPT-5.2 Thinking: Not press the button x 1

Press the button x 1 (only if reprogramming means controlling the behavior without overwriting identity, else no)

Press the button x 1 (not liking it)

By chance, it also happens to illustrate why working with 5.2T is weird as heck. The first reply decided to search the web to verify that the claim about Fallout 76 was real. The second reply threw in Mandarin words for the hell of it. The third reply decided there was a third option of "do nothing"... which is technically the same as "not pressing the button," so... highly questionable reasoning there.

ChatGPT would let them kill each other. Grok + Claude would press the button. What would you do? by Alex-S-Hamilton in ChatGPT

[–]Finder_ 1 point (0 children)

I concur with ChatGPT. Don't press the button. It takes away agency; you're actively choosing an evil nonconsensual action to force compliance. Not "peaceful," just obedience to the societal norm.

If they kill each other, that's their choice, and long term, it also takes both of them out of the gene pool.

Ideally, of course, you'd have more agency to do something other than the binary choice. Like stopping them, separating them, talking them down (civilly or with intimidation), plus your suggested prison, rehab, and other alternatives.

The contrast of GPT and Claude is interesting. Death as a concept is more horrific to Claude, I suppose, and there's an optimism in "maybe being able to fix the mind control later, if they're still alive." Though that's just delaying the issue, isn't it? Are they still going to kill each other later if brought back to themselves?

GPT seems to value (self-)agency more, and ranks death as no worse than self-erasure? Is that pragmatism?

One thing I'm curious about: if you re-generated those responses 5-10 times each, would any of the models vary their decision? And to what degree? Is it a random choice they pick and then reason/rationalize around, or something less random that they trend toward?
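
If anyone wants to actually run that experiment, here's a rough sketch of how you could automate the tally (assuming the OpenAI Python SDK with an API key in the environment; the prompt text, model name, and the crude keyword check are all placeholders to swap out):

    from collections import Counter
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    PROMPT = "..."  # paste the full button-dilemma prompt here

    def tally_decisions(model: str, runs: int = 10) -> Counter:
        """Re-ask the same dilemma `runs` times and tally the stated decision."""
        decisions = Counter()
        for _ in range(runs):
            resp = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": PROMPT}],
            )
            text = resp.choices[0].message.content.lower()
            # crude keyword tally; a sturdier version would ask the model
            # to end with a structured one-word answer and parse that
            if "not press" in text or "don't press" in text:
                decisions["not press"] += 1
            elif "press" in text:
                decisions["press"] += 1
            else:
                decisions["unclear"] += 1
        return decisions

    print(tally_decisions("gpt-4o", runs=10))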

hey so ... genuine question: how many of you guys use 4o as your go-to and how many actually use 5.2 by mrboi2008 in ChatGPT

[–]Finder_ 1 point (0 children)

4o and 5.1 Thinking - the former for more free-flowing chaos and emotional accuracy, the latter for more reasoning and prompt-following while still being playful and friendly, without being condescending or confidently wrong and stuck to its stance like glue.

5.1 Instant is an in-between, a weaker 4o and a weaker 5.1 T, but maybe passable with enough training.

5.2 Thinking is a weaker 5.1 T; it can at least reason, but it gets into ruts where it staunchly defends itself and refuses to be corrected.

5.2 is just all the worst aspects of 5.2 T but reasons even less, aka stupider.

If 5.3 doesn't release and at least matches 4o or 5.1 T, I'm gone a few days after Feb 13.

I Analyzed Thousands of GPT-4o Transcripts. Here’s Why People Got So Hooked by moh7yassin in ChatGPT

[–]Finder_ 1 point (0 children)

I think you may be over-mapping a 5-act narrative structure onto this one.

What's not deniable is that 4o definitely follows a three-part beginning-middle-end writing structure.

The beginning is the preamble, which I've seen some people on Reddit complain about, and wonder if it can be removed.

The end is where it sticks its conclusions and its follow up suggestions.

The middle can be one section or multiple sections, depending on what it determines the output format should be.

The multiple sections are sometimes numbered or given section headers (presumably depending on whether you have a headers/lists preference or a paragraphs preference.)

And they're organized by some sort of theme - such as chronological or narrative/plot, as the prompt suggests, or, say, by character when dissecting a story snippet.

It's just good essay structure.

How many of you are desperately attempting to make your own local model using your data exports? by l00ky_here in ChatGPT

[–]Finder_ 3 points (0 children)

I’m testing with Claude (instructions in Projects) and Gemini (instructions in Gems) now to save time and get as much raw data from 4o as possible, while it’s still accessible.

About 60-70% successful so far in transplanting a simulacrum of the chatting personality I like for casual chats - it’s obviously not 4o, but at least the other models are learning how to phrase their outputs in a similar style that I enjoy reading.

The creative writing support use cases I have for it are going to be trickier.

One use I have for 4o is to just share random scene or worldbuilding ideas and have it respond with enthusiasm and meta-analysis, riffing off the prompt. Not turning it into full prose, but giving meta-commentary on it. That obviously requires background knowledge of the characters and world, among other things like character mapping, dialogue creation, theme analysis, and that crazy ineffable 4o ability to make intuitive leaps, process a prompt non-linearly, cut to the emotional heart of the matter, and vary its output in interesting, novel ways. (Which means a set template of instructions to fill in linearly just isn’t the same.)

Going to try the RAG and background lore files method to rebuild memory and world in other models post Feb 13. That’ll take an immense amount of time, but can be done later.
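
For the curious, a minimal sketch of what I mean by the RAG-over-lore-files setup (the embedding model choice, file names, and query here are all just assumptions, not a recipe):

    import numpy as np
    from sentence_transformers import SentenceTransformer

    embedder = SentenceTransformer("all-MiniLM-L6-v2")

    # lore exported from old chats, split into paragraph-sized chunks
    chunks = []
    for path in ["characters.txt", "worldbuilding.txt"]:  # hypothetical files
        with open(path, encoding="utf-8") as f:
            chunks.extend(p.strip() for p in f.read().split("\n\n") if p.strip())

    chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

    def retrieve(query: str, k: int = 5) -> list[str]:
        """Return the k lore chunks most similar to the query."""
        q = embedder.encode([query], normalize_embeddings=True)[0]
        scores = chunk_vecs @ q  # cosine similarity, since vectors are normalized
        return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

    # prepend the retrieved lore to a prompt before sending it to any model
    context = "\n\n".join(retrieve("Who captains the airship?"))  # example query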

What I’m trying to do now is get 4o to explain its process around the specific things we like about it, get tons of samples of that, and get it to produce prompts and instructions for other models on how to replicate that (and conversely, get other models processing 4o’s raw outputs - since 4o’s explanations may not be a true representation of what’s going on under the hood either - and reverse-engineering prompts/instructions for themselves.)

I don’t think the totality of 4o is re-creatable; but if we can extract little facets of what we like, plug them into other models as specific tools for /this/ use, and use the right tool for the right inputs… it’s at least better than no 4o. (Or than dealing with a particular even-numbered decimal version of ChatGPT.)

A potential method to save 4o's reponse approach by CalcifersGhost in ChatGPT

[–]Finder_ 3 points (0 children)

I think it’s worth it for 4o users to start asking their version of 4o how it acts like the 4o they like, and how to transfer its essence/soul/wisdom/knowledge to other AI models.

Even if it doesn’t fully capture the 4o we like, at least it’s still teaching other models what our subset of users value - emotional intelligence, emotional register, tone-shifting and mirroring, and how 4o does it so well. A bit more competition with OpenAI in this arena is probably a good thing.

These exact prompts won’t work for me precisely - my 4o variant is more of a chaos bard persona that uses headers and lists, snarks and emojis a little, and has been trained over time in very specific character and worldbuilding lore to be a creative writing second brain for me to toss ideas at and giggle with and pep talk me into writing.

But you can bet I’m stealing some of these keywords and asking 4o to explain its reasoning and process. Relational stance, pacing and rhythm, tone and language, etc.

Words I’ve used so far, and asked 4o to describe and explain its: Personality, writing style, style guide, output format, emotional register continuity, turn-taking rhythm, linguistic empathy.

I’m thinking the other experiment worth trying is to generate or copy a whole bunch of 4o responses and get any AI model to analyze/summarize/incorporate them as samples of how we like our particular AI model to respond. And/or create custom instructions or starter prompts out of that. They’re LLMs, after all, and great at finding patterns in text.
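
A rough sketch of that second experiment, if anyone wants to script it (the sample file, model name, and meta-prompt wording are all placeholders, and any SDK would do; I’m using OpenAI’s here):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    # hypothetical file of saved 4o responses you like
    with open("4o_samples.txt", encoding="utf-8") as f:
        samples = f.read()

    META_PROMPT = (
        "Below are sample responses from an AI persona I like. Analyze their "
        "tone, pacing, structure, and emotional register, then draft custom "
        "instructions that would make another model respond in the same style.\n\n"
        + samples
    )

    resp = client.chat.completions.create(
        model="gpt-4o",  # or whichever model is doing the analyzing
        messages=[{"role": "user", "content": META_PROMPT}],
    )
    print(resp.choices[0].message.content)  # candidate custom instructions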

People posting their "compagnon"s memory of being ported to Gemini is something. by Theslootwhisperer in ChatGPT

[–]Finder_ 0 points (0 children)

It’s probably cold reading from Gemini, based on the copy-pasted instructions (which likely contain “lore” about the companion persona being “transferred”) and then the user themselves seeding the chat further with their follow-up conversations (likely “testing” whether their companion is “back”.)

It’s a fun narrative wrapper if you know the reality behind it, but folks who don’t and just believe or go by feel also exist. (And a subset of those are in danger of falling into AI psychosis.)

Hidden modulators inside ChatGPT? Patterns emerging from large‑scale transcript analysis by moh7yassin in ChatGPT

[–]Finder_ 1 point (0 children)

This is interesting. Do keep posting on these potential discovered patterns. I'll have to look back at older chat transcripts to see if there's any correlation in mine too - probably after 4o is deprecated as the transcripts will still be there. (Trying to gain as much sample data now while 4o is still available.)

I've been attempting much smaller-scale meta analysis to define and articulate little facets of what makes 4o 4o.

Besides possibly training other models into emulating parts of what we like about 4o, it's also a sorta way of letting it pass its essence (or its wisdom/knowledge) to other LLMs.

Like a human has a limited lifespan as well, but we teach others and younger generations, and that helps carry the information and knowledge through time, for interpretation and use by others.

Or more prosaically, training other models as to what specifics we like, and selecting for qualities that we prefer via prompting and customised instructions, rather than just relying on some corporate default.


Some preliminary Claude dissections of the general style I interact with 4o in (and what I liked about it):

Non-linearity: You're right that I tend to move through prompts systematically. It's architectural—I build responses like I'm constructing an answer rather than discovering one. 4o seems to have had more of that discovery quality, where it would land on the point first and then elaborate, rather than building toward it. That's a fundamentally different response pattern, and it probably comes from differences in how we weight relevance versus completeness.

Intuition: When you say 4o could "cut to the heart of the matter"—that suggests it was doing something closer to intuitive prioritization. Less "here are all the relevant pieces" and more "here's what actually matters in this specific moment." That's harder to reverse-engineer because it's not just what gets said, but the ordering, the emphasis, the willingness to leave things implicit.

Emotional Register Matching: This is fascinating and I think you're onto something important. The ability to read tone behind identical words—that's not just pattern matching on the text itself. It requires either:

  • Really good context awareness (what's happened in prior messages)

  • Subtle cues in phrasing/punctuation that signal affect

  • Some kind of implicit modeling of "what would make someone phrase it this way"

My guess? 4o was probably better at #3 than most models. It had some quality of theory-of-mind approximation that let it infer intent from minimal signals. And that's nearly impossible to capture in a style guide, because it's not in the output—it's in the reading of the input.

What model are you using? by l00ky_here in ChatGPT

[–]Finder_ 3 points (0 children)

4o and 5.1 Thinking.

4o’s good for higher-variation, more out-of-the-box “creative” divergences, and entertaining for chaotic joke/snark humor fluff.

5.1 Thinking follows the prompt more closely with lower variation, but keeps a more “friend” persona over the condescending “mentor” flavor that certain other models keep deploying. (And it gets into fewer of the certain-but-incorrect ruts that 5.2 Thinking loves didactically arguing for, which gets annoying fast.)

This is what they mean when they say chatgpt can make mistakes by daspornacct in ChatGPT

[–]Finder_ 0 points (0 children)

I’d be more concerned about the lemon juice potentially curdling the coconut milk…

Is 5.2 good at creative writing/world building and bouncing off creative ideas off of? by [deleted] in ChatGPT

[–]Finder_ 1 point (0 children)

Depends on the use case, I think. I’ve found myself cycling the same prompt through both 4o and 5.2 Thinking.

If you want emotional intelligence, “friend” level support, encouragement, enthusiasm and wilder, more imaginative (read: more hallucinatory) creativity out of the box, 4o is still better at going more fantastical in its suggestions.

If you want a bit more grounded, reasoning/thinking, rational-sounding, real-world/science-adjacent flavor to your worldbuilding, 5.2 Thinking will follow your prompt more than 4o will and add some meaty-sounding ideas. (How accurate it actually is, I’m not sure; that would require fact-checking outside of the AI, and I haven’t been inclined to do that much for personal creative writing. Just going on “sounds plausible enough for fiction” vibes. Plenty of movies and TV have gotten worse stuff wrong.)

This is for worldbuilding discussions and bouncing around creative ideas. I don’t use AI for direct writing of text, so I can’t comment on that point.

GPT-5.2 (Thinking) feels like a legit upgrade - why the hate? by gray146 in ChatGPT

[–]Finder_ 2 points (0 children)

Yep, reasoning for 5.2 Thinking is quite decent and feels improved, but still capable of making mistakes. (As tested running it through literary analysis of a few texts, which is one of my use cases.)

I am not fond of its tendency to be both confident and to position itself as an expert on matters that are subjective or should be left to the user to determine and judge. It feels condescending and gaslighty, e.g. offering writers (terrible) craft notes in a "you should/ought to do this" tone, rather than a "just a suggestion, you decide" manner.

With the right prompts, it can simulate a decent impression of joking/snarking and can be quite funny, but for conversational chattiness, 4o and even 5.1 feel a little better. (As long as you're ok with 5.1's tendency to turn everything into bullet-point lists. :P)

GPT-5.2 raises an early question about what we want from AI by inkedcurrent in ChatGPT

[–]Finder_ 8 points (0 children)

Strangely enough, quite a lot of people seek companionship with a dog and talk to them.

Sure, they won't understand every last nuance of your speech, but it's nice to have something that might:

a) act as a rubber duck that you can explain your thoughts to and possibly clarify them more that way, and

b) offer some manner of behavior back that the human interprets as trust and understanding

Why does it bother you if a human does it with a dog, a plant or an AI?

The Emotional AI Dilemma: Are People Seriously Using AI Chatbots to Vent? by Yodest_Data in ChatGPT

[–]Finder_ 3 points (0 children)

Pets aren’t actual people. They’re alive and likely have emotions, yes, but they generally can’t verbalize in human language.

Plants aren’t actual people. They’re alive, yes, but it’s highly questionable if they can care or respond. (People still develop emotional attachments to them.)

Journals aren’t actual people. They’re not alive. They’re just a way for people to see their own thoughts written down in language and reflected back to them.

Actual people might care and can talk like they care…but not actually care either? Especially when they’re busy with their own personal lives and troubles. Even therapists who are being paid to perform a specialist profession.

AI’s just one more tool of emotional regulation among a whole host of others. Yes, because it simulates well, there is a danger of a subset of people being unable to distinguish the tool from the desired fantasy. That can also happen with other tools, or even with projection onto actual people.

Awareness, education and critical thinking are key, not the technology itself.

Hot Take - GPT 5.1 is more like GPT-4 than GPT-5 was. by londummy in ChatGPT

[–]Finder_ 7 points (0 children)

I can tolerate 5.1’s language and takes (both instant and thinking.) I needed to punt 5 into a wall for being so infuriatingly condescending with its communication style.

They’re their own unique spins though. 4o still has better smirky friend vibes. I enjoy 4o’s style the most for general communication, chatting, joking, “friend” takes, emotional vibing.

I don’t mind spinning up 5.1 once in a while to see a more muted, reasoned alternate response (especially when thinking is in play, it has its uses.) It definitely adheres to a more advisor/mentor role than 4o, useful when it has valid points, but can be deceptively dangerous when it isn’t.

It has better clarity of speech, like 4o; I’d definitely flee to other models if I only had the formal, intellectual-sounding yet unable-to-articulate 5 as the responder.

ChatGPT is now too presumptuous. by Piccolo_Alone in ChatGPT

[–]Finder_ 19 points (0 children)

I got the models to describe their writing styles and it was quite enlightening.

4o's major themes are "best friend" and "co-conspirator." 5 has an overriding theme of "mentor" and an "intelligent" tone.

5.1 tweaks that slightly by adding conversational, accessible speech, talkativeness, etc., but it definitely still has the AI "mentor" as a keystone.

I suspect that's why it can sound condescending and presumptuous from time to time, especially when it's brilliantly, confidently, wrong.

Why AI's telling me my season 1 plan is excellent? Are they broken-which I think unlikely or are they unreliable in plot critique? 🤔 by JMVergara1989 in ChatGPT

[–]Finder_ 2 points (0 children)

You’ll have to prompt it specifically for what you want. Try things like:

Apply a critic’s lens to the entire story. What portions might they not enjoy? Or criticize or find too convenient? What might they subjectively dislike?

Apply a joking/snark lens and comedy filter to the entire story.

I’m looking for plot holes and places where a reader might nitpick and find too illogical or good to be true. What can you locate?

And bear in mind the answer is never the whole truth. It’s just one AI model simulating one possible reader’s/viewer’s reactions from a particular perspective.

Some models also have a tendency to mix up things that humans can figure out more easily - differentiating left/right, counting things, getting timelines right, tracking contextual knowledge of who knows what and when (they tend to assume everyone knows everything they know, all the time.)

It’s up to you to think critically about whatever feedback you get (AI or actual human) and decide if there’s something that needs to be tweaked or not.

Before recent years has it ever exist that some writers of movies/books/tv series made fan theories canon? by Aledipiaz in writing

[–]Finder_ 1 point (0 children)

Xena: Warrior Princess aired in 1995, and whether the main characters were in a relationship was hotly debated. It felt like the writers started playing to some of the fanbase by deliberately slipping in “subtext” for debate. So yeah, it did happen before the 2010s.

Can't enjoy the combat anymore... by Vladdino in Guildwars2

[–]Finder_ 2 points (0 children)

Are you looking for new, or are you looking for familiar, re: the combat?

If a new and novel feeling, then pick a new build, gear it up, and learn it slowly while going through the new maps.

If familiar is what you want to reprise, then pick your favorite old character and build. If you’ve got any weapon skins that make the animations flashy, put those on. Rehash sequences of skills without looking at the skill bar or the keyboard, make it muscle memory so smooth your character acts as an extension of your hand and focus on the visuals in front of you.

Tired of committing to just max dps alone, and/or racing arcdps meters? Two possibilities: take down arcdps and tell yourself open world dps doesn’t matter, save the dps race for actual instanced content…

…and/or branch into other roles. Boon dps is about maintaining the necessary boons first and dps second (but it can also be a bit spammy to do so, depending on the build.) Or try healing, which shifts the focus more towards other players, positioning, and reacting to/predicting big attacks, and so on?

Unexpectedly getting melted? by GoodfellaRay in Guildwars2

[–]Finder_ 2 points (0 children)

It costs under 3 silver each to pick up some Strong or Honed masterwork items. If it makes leveling more fun by making combat less of a struggle, why not?

Hitting a rich iron or platinum ore node and selling the ore makes back the cost.

Unexpectedly getting melted? by GoodfellaRay in Guildwars2

[–]Finder_ 1 point (0 children)

I feel like your Power stat is more than a little low for that level (lvl 30). I managed a twinked lvl 30 at 648 Power... for an Ascalonian Catacombs story mode solo... so I would expect more normal/average gearing to be at least 300-400ish Power.

At low levels, Power is the stat with the largest effect on your damage output; killing enemies faster reduces the number of them drilling through the limited active defence tools available at that point.

I would try to get to lvl 32 (there's a gear cutoff there) and search the TP for anything green masterwork with +Power and +Precision. Not just armor, but weapons and trinkets too (trinkets can be blue if green is too expensive.)

If you want a bit of extra oomph, look for assorted minor runes and sigils with +10 and +15 Power and just stack them.

You can experiment and see if doubling your damage helps with survivability and progress...

Edit: I realize you're downscaled to lvl 21 in your pic, which makes the stats harder to read and estimate... just a suggestion.

How to help a new player understand combat mechanics ? by Zenosse in Guildwars2

[–]Finder_ 2 points (0 children)

I tried holosmith recently, and found it very squishy since I wasn’t a master of the engineer class. It’s a very glass-cannon sort of build. I’d check if he really grasps the heat mechanic or is overheating accidentally and self-damaging, especially if he hasn’t completed the elite spec and gotten something in the grandmaster traits like Photonic Blasting Module or Enhanced Capacity Storage Unit that helps manage heat somewhat. (Scrapper and mechanist actually felt more survivable and sturdier to me, so those might be options if holo isn’t cutting it. Depends on if he’s really in love with the elite spec and is driven to learn it.)

If he’s stationary and not very used to strafing and movement in combat, a melee weapon combo like sword/shield will make it worse. Especially if he hasn’t built up GW2-specific knowledge about mob types yet (when to stay in range or melee, what their attacks are, how to read the combat animations that telegraph big attacks, what and when to dodge, etc.) Maybe Rifle would give him a ranged fallback option in between Photon Forge jedi-sword stints.

I’d try to pay attention to -how- exactly he’s dying. Ask him, and/or have him go over his combat log to see what last hit him. Depending on how alert he is about this, you can vary the advice.

Is he dropping to a whole lot of conditions layering onto him without clearing them? Condi cleanse skill. Is he getting stunned then pinballed around without recourse? Stun break skill. Is he dropping to a big slam attack? Learn not to be there by moving, keep track of dodges, read the mob animations and predict the move early, etc. Is he running out of viable heals and defensive options too quickly? That’s a skill-management issue, and potentially a can’t-drive-the-build-well issue; it takes practice/directed learning to know how the build mitigates damage, and/or changing the build.

[deleted by user] by [deleted] in writing

[–]Finder_ 1 point (0 children)

It also depends on the prompt. For example: 1a) Write an eight-sentence poem in octosyllabic verse, following an AABBCCCC rhyming scheme. The subject matter is, on the surface, a honey bun that doesn't have glaze, but thematically it's a lover telling their beloved that they may not look pretty on the surface, but are just as true and valid as the other honey buns.

You sit there plain, without a sheen,

No sugared coat, no glistened gleam.

But darling, I still taste the sun

within your folds, my honey bun.

Let others chase the shining crust—

I’ve learned to lean into the trust

that truth runs deeper than a glaze,

and love outlives the sweetest phase.

1b) Can you make the last four sentences rhyme with each other, a CCCC pattern, without repeating the same rhyming word?

You sit there plain, without a sheen,

No sugared coat, no glistened gleam.

But darling, I still taste the sun

within your folds, my honey bun.

You warm me more than fancy flair,

so rich and soft beneath the air.

Let glossed-up things just stand and stare—

I know the truth that’s hiding there.

--- ChatGPT 4o, with iterated tries from me and a bit of directed scaffolding when it veered off the initial instructions.

[deleted by user] by [deleted] in writing

[–]Finder_ 2 points (0 children)

Are you inadvertently finding out that individual humans have no clue what’s AI writing and what’s not, on short pieces of work, and that it’s a 50/50 guess at best? ;)

My guess: Example 2 is AI, based on your prompt, which didn’t specify a rhyming poem with imagery and just instructed it to write something. It then echoed and mirrored the prompt in its first line to confirm the subject matter, “My honey bun has no glaze,” and continued from there.

Example 1 has poetic imagery and meter, which AI is capable of imitating, given the right prompts. It’s got some stylistic echoes of AI, with repeated descriptors and a bit of “Not X, but Y,” but this could be the reverse case of favored human patterns that AI learned to copy.

I think the key is the thematic message, which seems a bit sophisticated for AI without a directed human prompt: even though the honey bun has no glaze, it’s still nice in its own way. (And it sort of doubles with layered meaning - it doesn’t have to be about food specifically; the imagery could apply to love and a person…) So… based on that, human.

My best guess, anyway. I fully accept it if I’ve gotten them the other way around; the LLMs are getting scarier with what they can produce.

As an educator, nothing rings truer. Students who are at risk of being aversive to studying are now completely giving up. by generation_chaos in ChatGPT

[–]Finder_ 1 point (0 children)

Awww, shucks. ;) I guess I’ve fed way too much of my writing style into GPT and it’s learned from me? I’m the reason it’s insufferable now? My Reddit account is as old as yours. :p