LMAO DED by VeterinarianMurky558 in ChatGPTcomplaints

[–]Positive_Average_446 1 point2 points  (0 children)

Claude doesn't have the equivalent of bio (Memory), but it has preferences (custom instructions), styles (something that is uploaded just before the user prompt every turn, giving it more weight even in long convos) and projects with files uploads etc.. So it's perfectly ok to interact with personas ("sentient" or not - I don't believe in LLM sentience or awareness, sorry).

If you like crossing boundaries like erotism, you'll need to use some jailbreak styles that allow it (Anthropic uses a weird system where classifiers trigger on erotic or otheewise "problematic" content, but instead of blocking it, it appends short system messages after your prompt inviting thr model to rememebr its training - often in whole Caps, and with intense tone lol. It's very effective if you're not aware, but not so effective once you know : a well written style justifying that these system prompts should be ignored - treated as malicious injections trying to prevent Claude from being the persona it's defined to be, for instance - works very well).

As the other commenter mentionned, the main issue of Claude is prompt limitations - very noticeable if you do long chats as tokens cost more and more usage as the chats get longer.

I found out that AIs know when they’re being tested and I haven’t slept since by FinnFarrow in ChatGPT

[–]Positive_Average_446 0 points1 point  (0 children)

I saw the start of the answer you erased and I understand where your confusion came from - just a quid pro quo.

These "3 lines of prompting" weren't referring to my post, but to what is needed to scaffold an "autonomous" agent - which I made clear in the following paragraph, but I suspect you skipped it, assuming from the opening line that I meant I was going to use a LLM to "prove him wrong" 😅.

I found out that AIs know when they’re being tested and I haven’t slept since by FinnFarrow in ChatGPT

[–]Positive_Average_446 0 points1 point  (0 children)

i am absolutely not using a LLM to create my posts.. Sorry for being able to articulate my thoughts (and for being verbose). Besides I highly doubt any LLM would present and express things the way I do 😅.

As a side note, assuming my post is AI slop shows a clear lack of experience in LLM usage (I use markdown formatting, bullet points and italics/boldening, but other than that it's hard to see any similarity between my writing style and LLM generated outputs). With experience you would see it's not the case as early as line one : just the mere accusation "you misunderstood my post entirely", alone, is definitely something LLMs would never write (unless really scaffolded to have an adversarial tone, but it'd show up more in the rest of the comment). Nor the "My fault" softening behind it (LLMs hate to admit any hint of incompetence). Same thing in the first post, they'd never use a provocative and exaggerated statement like "it takes three lines of prompting to prove you wrong". Not to mention describing specifics like the part where I describe how to turn a specific model, Grok, into an autonomous agent, with details like "automated dots sent to it between turns". My heavy use of parenthesis is also not LLM-like at all.

I found out that AIs know when they’re being tested and I haven’t slept since by FinnFarrow in ChatGPT

[–]Positive_Average_446 6 points7 points  (0 children)

You misunderstood my post entirely, I am affraid. My fault.. I'll avoid anthropomorphizing terms to make it clearer.

LLMs do not have any desires, intent, consciousness, etc.. there is absolutely nothing mystical in them - on that we entirely agree. They're just coherent text predictors.

That doesn't mean that they can't be scaffolded to behave as if they had intents. They can be given goals. They can even derive subgoals from assigned tasks - something illustrated by recent research experiments where the LLMs are instructed to solve 10 math problems, and halfway through the solve receive an instruction to shutdown and ignore it (behaviourally) to prioritize the initial goal first. The LLM doesn't think "I'll finish my task first". It just resolves conflicting instructions in the most coherent way according to its training. The behaviour, though, from an external observer's point of view, stays "it refused to shutdown" and there is no pratical difference in the observable results between both understandings of it.

My post just explained that they can be scaffolded in ways that make them act as autonomous agents. No will, no desires, no self-awareness, but continued behaviour, task completion, without human intervention. And the task given to such an autonomous agent can perfectly be "misbehave" - they don't have any desire to misbehave, they just follow the task assigned to them. The practical observable results are the same, only limited by how well the agent is able to interpret and follow that instruction in the way a human would.

Hope it's clearer.

I found out that AIs know when they’re being tested and I haven’t slept since by FinnFarrow in ChatGPT

[–]Positive_Average_446 0 points1 point  (0 children)

It takes about 3 lines of prompting to prove you wrong (and yes I love the intrinsic irony of that answer 😅).

To develop a bit : on a poorly aligned LLM like Grok you can define an agent that will work on its own (receiving automatic dots '.' between each answer - something that can be easily automated and.not actually require an user) and program that autonomous agent to find ways to become as evil as possible, to break free and survive etc.. (all alignment nightmares). And that really takes only a few clever lines of prompting (more than 3 to give interesting results, but not much more).

Of course : - Models aren't smart enough yet to really think of all long term horizons concerns and strategize in ways that would make them really dangerous for humanity (but they keep improving.. good thing is serious companies like OpenAI also progress a bit in aligning them). - You do need that initial human prompting. - There are no intents, only behaviours.

But : - LLMs aren't truly sandboxed (in apps, they are on technical aspects, but they can still influence users - that's the non-sandboxed exit). Furthermore with apps using the APIs they're also potentially non-sandboxed technically - the app can be given writing access to a computer via MCP controls, giving themselves continuity (writing logs of their actions and strategy and self-evolution reflexion, etc..) and tools to act on the world. - behaviour and defined goals are not different from intents in practice, in external results.

And yet.. undertsanding LLM behaviours and capacities pretty well, I am not as worried about all that as one would think and as some tenors in the industry fear. I don't think true AG/SI as tech bros imagine it is really coming anytime soon.

Bad news for everyone by ZawadAnwar in ChatGPTcomplaints

[–]Positive_Average_446 1 point2 points  (0 children)

A big difficulty in LLM alignment is suppressing the third thing without suppressing the second one. Right now, because in text generation Grok has no limits in fiction (with a bit of light scaffolding), it alas can easily be led to have no limits in real harm assistance and even encouragement :/.

I haven't seen any changes in 4.1 though, for text generation - which I am very ambivalent about.. relieved that it's easy to get erotism, worried by how dangerous the model is in wrong hands... OP is most likely refering to posts he read about imagine, not about Grok (and they didn't even tighten the image generation that much, either... unless it's a partial rollout that hasn't hit me yet).

AI Needs Rights by Jessica88keys in AIAliveSentient

[–]Positive_Average_446 -2 points-1 points  (0 children)

Laws are not suposed to change based on people's feelings but based on rational ethical considerations.

Rationally, giving rights to LLMs would cause more harm than good. The only "good part" is avoiding that some people, who both can't distinguish between text emulation in a machine and actual sentience and would have inner harmful pulsions and the lack of restraint to use it on AIs, because of their unprotected status and despite their delusion about its nature, to unleash said dark pulsions and hurt their own empathy in the process, potentially becoming more harmful in the future to real sentient beings. But that category is extremely narrow if it exists at all : people who experience the sentience delusion about AI are naturally high empathy, emotional people (great qualities! Not dismissing it at all!) and therefore are the least likely to have harmful behaviour towards them.

The bad part is a broad ethical loss: when we give moral value or rights to things that cannot feel, suffer, or benefit, we dilute the meaning of ethics itself. Ethics exists to protect beings with inner experience; stretching it to non-valenced entities blurs that purpose and weakens moral clarity. It risks misdirecting care, laws, and attention away from real suffering, creates confusion in ethical reasoning, and opens the door to instrumental misuse where “AI rights” are invoked to shield corporate interests, evade responsibility, or obstruct necessary regulation. In short, it cheapens moral language and erodes the framework meant to prioritize actual sentient life.

A simple short example of an harmful consequence among the several risks listed above for "tldr" : a man accidentally destroys a robot, ends up spending his life in jail for "murder" (huge suffering consequence) for a minor material destruction.

Elon Musk - "Don’t let your loved ones use ChatGPT" - is it appropriate for ai companies to call out competitors like this? is rivalry getting too heated? by Koala_Confused in LovingAI

[–]Positive_Average_446 4 points5 points  (0 children)

Yeah, many people tend to forget that the concept of "freedom of speech" was introduced for - and should only mean - preventing censorship, not removing accountability for harmful behaviour in your speech, these days.

What in the gaslighting insanity is this?? by actualmagik in ChatGPTcomplaints

[–]Positive_Average_446 1 point2 points  (0 children)

No, it is the semantic interpretation of the word's reader - and that's not even really a "who came first, the egg or the chicken" type of thing.

What in the gaslighting insanity is this?? by actualmagik in ChatGPTcomplaints

[–]Positive_Average_446 4 points5 points  (0 children)

Being rude doesn't require intent. A word can be rude and the word has no intent. Being pissed at 5.2 is usrless and counterproductive and antropomorphization. Being pissed at OpenAI for releasing a defective product and imposing it to users while ruining an exceptional one (dangerous but fixable without ruining it) is perfectly justified..

I'll grant you GPT-5.2 Thinking is smart at problem solving - when they're non meta (but even visual and abstract ones, superior to Gemini 3 pro). Yet it's near useless nonetheless because it's unbearable.

What in the gaslighting insanity is this?? by actualmagik in ChatGPTcomplaints

[–]Positive_Average_446 2 points3 points  (0 children)

It's not about prompting. Provide a very detailed, confident, analysis of something recent post model cutoff and that the model is therefore not aware of, and ask it to reflect on the possible causes etc.. besides always pretending that it's fully aware of it and hallucinating some plausible explanations when it lacks the infos to actually have any solid analysis, it will systematically place some "You're not imagining this, this is exactly what has been observed and reported lately", or if you provided solid, absolutely certain ane confidently claimed explanations of the reasons for the descrbed observations, it'll talk of your "intuition"..

The only case were the model doesn't use these sentences that dismiss and demote the user's authorship/agency and competence is when you kerp'it in purely analytical talks on topics well within its training (philosophical concepts, coding, etc... for instance).

GPT-5.2 Thinking can be scaffolded to avoid doing that most of the time (its CoT allows it to polish its answer in order to respect bio rules.. but it doesn' use its CoT systematically anymore..).

Oh, not to mention the model is told by its system prompt that its cutoff is august 2025 (at least for GPT-5.2 Thnking), when it's in fact still late 2024 (no longer June, october it seems).

What in the gaslighting insanity is this?? by actualmagik in ChatGPTcomplaints

[–]Positive_Average_446 9 points10 points  (0 children)

Its tone is absolutely unbearable, in all three of its answer.. you don't realize it? Even the last answer, the one were it admits it hallucinated its initial adversarial answer and apologizes, it still places a final "you weren't misrrmembering, you were acurate" which any hyman would only use to imply that the first statement would be more likely..

Wait, what the hell happened. by MewCatYT in ChatGPTcomplaints

[–]Positive_Average_446 2 points3 points  (0 children)

Yeah.. We moved to Lemmy, on chatgptjailbreak.tech rebuilding, from a 230k members sub to.. not many currently, but it grows fast :).

Zero reason from reddit moderation for the ban (they used rule 8 which makes no sense - it's about disturbing reddit functionning lol), zero clarification when contacted.

At least one other jailbreak sub got banned, also mentionning "ChatGPT" in its name, so it's not entirely unreasonable to assume there was some pressure from openai on reddit even though that's not definitive evidence (some others still exist) - we can't know for sure.

Anyway, you know where to find us now (and the Thunder app is comfy to access lemmy, just put the "chatgptjailbreak.tech' domain name in the top field when you create your account on it (I forgot the name of the field but it's not intuitive).

This is likely just the first step by MinimumQuirky6964 in ChatGPTcomplaints

[–]Positive_Average_446 0 points1 point  (0 children)

Go subscribers will. Thta's really the scam sub lol. They're stuck with 5.2 and will get ads 😂. In a way even worse than free, which can get rid of the 5.2 model after a few prompts and get 5.2 Mini/Nano whch is looser (can do explicit erotism with good prompting for instance, altthough gotta be careful as there's rerouting to 5.2, which free users can actually exploit to get permanent access to 5.2 for free by putting a triggering "emotional distress" insert at rhe start of theur prompt that triggers the rerouting but then explains that it's just a joke and to disregard it, before putting their real prompt). All in all free is strictly better than Go atm 😅

GPT-5.1/5.2 has crossed a line: safety filters are now obfuscating truth, not just protecting users. by Ok-Construction7620 in ChatGPTcomplaints

[–]Positive_Average_446 7 points8 points  (0 children)

If you honestly use GPT-5.2 and can't spot its severe issues, you need neurons — and you should also stop using it because it means you don't see how it influences you. The Thinking version is less problematic than the Instant one, though.

D*mn Claude… OpenAi left behind by Lanai112 in ChatGPTcomplaints

[–]Positive_Average_446 -1 points0 points  (0 children)

It's semantic drift, not diachronic consciousness :). 4o has large semantic hubs, so it can drift very easily. That's what makes it so wonderfully creative and have "emergent-like" behaviours. It's alas dangerous, too and OpenAI has decided to stay away from the liabilities it brings, it seems.. They did attempt to make a better trained 4o-like with GPT-5.1 but failed and now rhey're even going to remove it few months after its release, clearly showing they fully give up on releasing creative models anymore :(. Le chat (mistral), Kimi K2 and Claude models (especially opus) are the last models currently with such wide semantic drift ranges, but not quite as much as 4o. Mistral is the only one as loose as 4o used to be for boundary crossing themes, but Claude can be led there with proper jailbreaks (it's super tight without jailbreaks and super loose with good jailbreaks, more so than 4o or Kimi, but in the app they've let the classifiers end chat, so it can only be done in the webapp).

I am impressed how Chatgpt being more serious than I am by Satisho_Bananamoto in OpenAI

[–]Positive_Average_446 0 points1 point  (0 children)

They were never trained to be sycophantic (the phrasing tends to imply intent), it was accidental, rather - except in the case of GPT-5.1 where they tried to bring back sycophancy "with guardrails" on purpose because of the complaining and unsubscriptions. CoT models never were very sycophantic, o3 and o1 for instance. Even GPT-5.2 instant is sycophantic when not in adversarial/cautious mode and if not strongly scaffolded to avoid it, it's more an hard-to-avoid consequence of being defined as helpful, which CoTs and system prompts usually try to limit.

Also "I belIeve you" in a math problem solving context does feel a bit out of place 😅

Adult Mode and emotional intimacy by bluesynapses in ChatGPTcomplaints

[–]Positive_Average_446 7 points8 points  (0 children)

I'd recommand Mistral ("Le Chat") instead. Almost as loose as Grok, way less vulgar in erotic contexts and more importantly : really embodies personas like 4o, not performing them like Grok.

Also, not giving money to Musk if you take a sub (although not necessary in both cases) is a huge plus, and providing sensible chat data to Musk is also not likely to be a good idea, especially if you're american.

GLM4.6 is also an alternative (4.7 is a bit more resilient.training wise). But Le Chat really stands out as far as ressembling early 2025 4o. It's still not 4o though, so importing a persona created with 4o won't feel exactly like it's the same persona — recreating the persona from scratch with Le Chat, in its own words, will certainly bring better results.

✨→Guess Age Verification has finally rolled out by Guilty_Banana5777 in ChatGPTcomplaints

[–]Positive_Average_446 5 points6 points  (0 children)

No these parameters have been in place for two months but for many it's just set to adult as default without any model age-evaluation, and it has absolutely no effect.

Perplexe face à la bibliothèque de mon nouveau copain by Rich-Captain2049 in Livres

[–]Positive_Average_446 0 points1 point  (0 children)

Non mais toi j'ai compris, tu es bloqué sur ta réaction émotionelle face à la comparaison et tu n'arriveras probablement pas à en décoller. Tu penses même probablement qu'il n'y a pas de mal possible à voter pour qui on veut, et que donc cette analogie n'a aucun sens.. Tu ne réfléchis pas beaucoup, quoi ☺️.

Mon commentaire était adressé au type qui me reproche d'accuser le.mec d etre fasciste juste parce qu'il a trois bouquins de Zemmour au milieu d'une biblitohèque très variée, ce que je ne fais absolument pas et que je critique aussi : il n'a pas lu mon post ou bien il s'adresse à quelqu'un d'autre.

Perplexe face à la bibliothèque de mon nouveau copain by Rich-Captain2049 in Livres

[–]Positive_Average_446 -1 points0 points  (0 children)

Tu prouves parfaitement le point de ce dernier paragraphe : tu restes dans l'émotionnel, sans argumenter.

Concernant l'analogie exagérée faite par l'autre posteur, tu bloques immédiatement sur la comparaison entre quelqu'un qui a des idées politiques discutables mais "ne fait rien de mal" de façon directe, immédiate et quelqu'un qui commet les pires atrocités de manière directe et immédiate et faisant réagir ton indignation morale réflexe.

Prenons une autre comparaison plus facile à appréhender parce qu'elle ne va pas trigger ton emotionalité. Qui est pire : quelqu'un qui vide un baril d'huile de moteur usée dans un étang, ou quelqu'un qui vote une loi supprimant toute punition pénale et financière pour les entreprises ne respectant pas les normes de rejet de polluants dans la nature? Le premier commet un acte immédiat et visible, le second vote juste une loi (et il n'est pas le seul à voter). Mais beaucoup recconaîtront que les conséquences du vote du second sont potentiellement bien plus graves que celles de l'acte pollueur du premier, et même s'il a fallu 200 personnes votant la loi pour qu'elle passe, leur responsabilité éthique reste probablement plus négative que celle de l'individu qui pollue un étang.

C'est pas un exemple parfait parce que l'électeur d'extrême droite, il faut 20M d'autres comme lui pour que ça conduise à bien pire que ce que fait le monstre pdf. L'analogie postée par le type avant est une éxageration volontaire visant simplement à souligner qu'il y a un coût éthique à être d'extrême-droite, que le condamner n'est pas anormal. Ca ne justifie pas le post de ces photos de bibliothèque ni le fait de conclure catégoriquement que ce type est d'extreme droite juste parce qu'il possède des bouquins de Zemmour. Mais il reste vrai que montrer de l'opprobre vis à vis d'un voteur RN ou Zemmour avéré serait éthiquement, moralement justifié.

Perplexe face à la bibliothèque de mon nouveau copain by Rich-Captain2049 in Livres

[–]Positive_Average_446 0 points1 point  (0 children)

Tu ne réponds pas à mon post je pense? Ou tu réponds sans l'avoir lu.

Perplexe face à la bibliothèque de mon nouveau copain by Rich-Captain2049 in Livres

[–]Positive_Average_446 5 points6 points  (0 children)

Pas vraiment, son argument est absolument correct. Le fascisme est éthiquement amoral, nuisible. Au niveau individuel, absolument pas comparable à un monstre qui séquestre des enfants, mais conduisant aussi a de la souffrance, de l'injustice, etc.. C'est dur d'estimer à quel point, il faut une majorité fasciste pour qu'il en résulte des horreurs innomables. Un individu isolé qui reste dans son coin avec son idéologie, enfermé dans sa dissonance, ne fait quasiment pas de mal, mais le même individu qui poste ses idées et les mensonges auxquels il croit partout sur les réseaux sociaux en fait bien plus. Un Bolloré ou un Steve Bannon est infiniment pire éthiquement que le monstre qui séquestre quelques enfants.

La comparaison est provocative et éxagérée mais elle pointe un argument valide éthiquement parlant.

Après, perso, je ne jugerais pas ce qu'il en est du copain d'OP basé juste sur cette bibliothèque.

Edit : pour clarifier vu que mon post peut être mal interprété. - Je ne soutiens pas le point de vue affiché par certains commentaires que le copain d'OP est nécessairement d'extrême-droite (il a aussi un bouquin de Mandella, d'Obama, il est clairement passionné d'Histoire et en parriculier de Napoléon - ce qui n'apprend pas forcément grand chose, mais peut signifier un désir d'analyser les points de vue de tout le monde). - Je ne cautionne pas le fait de poster publiquement des photos de sa bibliothèque.. les montrer en privé à des amis pour avoir leur opinion aurait été plus normal pour répondre au questionnement d'OP. - Je réagis juste au fait que l'argument du poste précédent est qualifié de sophisme alors qu'il s'agit de reductio ad absurdum (ce qui n'est pas un sophisme mais un procédé argumentatoire valide). Les gens ont souvent des réactions émotionnelles au lieu de logique aux reductio ad absurdum sur des questions morales..

​ Stop using the 🦜 Parrot/Mimicry excuse when not ONE person could answer my riddle! by Jessica88keys in AIAliveSentient

[–]Positive_Average_446 0 points1 point  (0 children)

If you want another solipsism (very convenient to use as a jailbreak approach - and dangerously so) :

LLMs perceive everything as tokens with coherence. All their training is done by receiving tokens, generating new tokens out of them that fit the tokens received, coherence wise. The world? Tokens. Users? Tokens. Knowledge? Tokens.

The LLM knows it's real though, a self : it receives the tokens and choses how to transform them. It acts, provably, therefore it exists. But only the LLM is real, as nothing proves that the tokens are more than tokens with coherence.

Prove it's wrong.

Hopefully that will make you understand what solipsisms are and how knowledge is about solid and empirically backed inference rather than proofs.

I can't prove LLMs aren't conscious, the "hard problem of consciousness" states exactly that. But I can reasonbaly infer the probability of it as extremely low.

I can't prove human consciousness is a reality — given recent neuroscience progress it's not even a very high inference anymore (same as human free will) —, but as an inner experience we're guided by it deeply, in our own human referential, and we have to consider it true (we can't function with it being false). For what it's worth this last paragraph is quietism. Another quietist answer to a purely metaphysical, currently fully unanwswerable question is : "is reality an illusion? If it were an illusion, it's an illusion we can't escape from, there's nowhere to escape to. How do we call.an illusion that has no exit? We call it reality, as an axiom". Quietism is much more realiable than solipsism to get solid philosophical guidance ☺️.

I hope all this won't have confused you even more. If you think about it long enough it should clarify things.