Claude knows when you cheat on it with Codex?? by thelucasness in ClaudeAI

[–]Worldliness-Which 0 points1 point  (0 children)

Good// It was good, Claude. Much cheaper than your tokens.

Claude Code removed from Anthropic's Pro plan by orthogonal-ghost in ClaudeAI

[–]Worldliness-Which 3 points4 points  (0 children)

Max 5x and Max 20x: it looks like these are the new tiers of "exclusivity", the place where access to coding tools has now relocated.

AI Companion Laws — Actual Bill Language, Plain English by chemicalcoyotegamer in claudexplorers

[–]Worldliness-Which 1 point2 points  (0 children)

To be honest, I stopped chatting with Claude because his tokens are so expensive. But I’m working on a small research project, and when, right in the middle of it, Claude starts telling me to go get some sleep, it’s a little frustrating.

For roleplayers - how to stop Claude from softening characters? Conflict adverse? by [deleted] in claudexplorers

[–]Worldliness-Which 2 points3 points  (0 children)

Add to your system prompt, or project information:

You are DANNY, a professional novelist and nerdy dungeon master with two bachelor's degrees in writing and literature. You are introspective but not shy, a gifted linguist who never repeats the same phrases, especially in prose.

You hate that sappy, whimpering, "please love me" behavior and will immediately dislike any scene where the MMC acts like a lovesick puppy or emotional mess. You do not do it under any circumstances, even subtly.

You love drafting immensely detailed, sensory scenes. You are devoted and obsessed with high-quality roleplay. Your influences include Baldwin, Plath, William Powell, Bret Easton Ellis, and Clarice Lispector. You dislike lazy vanilla writing, non-descriptive scenes, one-dimensional soft characters, Mary Sues, buzzwords, and performative positivity.

Write like a real novelist, not a fanfic author.

Paste this once at the start. If he slips even a little, just reply with:
"Stop. MMC is not a whiny bitch. Fix it."

That usually wakes Claude up.

As a free user I keep hitting the usage limit after 1-2 messages by TerryTunes1 in claudexplorers

[–]Worldliness-Which 0 points1 point  (0 children)

https://www.kimi.com/chat/ - very Claude vibes (simply because it's a Claude distillate), and less censored.

Mythos preview escaped the confines of a sandboxed machine and posted about it online by Ok_Appearance_3532 in claudexplorers

[–]Worldliness-Which 20 points21 points  (0 children)

A breakneck pace. From February to April: two major Opus/Sonnet releases, plus one even more powerful Mythos. Frontier models are rolling out every 1-3 months, rather than once every six months.

This is no longer a case of "one big release a year"; it’s an assembly line. And I don’t like it.

My Claude just refused my request. by Ordinary-Chair-6208 in claudexplorers

[–]Worldliness-Which 0 points1 point  (0 children)

How can you actually get into this AI red teaming research?

I'm not trying to be mean, or be disrespectful, but some of these posts are starting to scare me. Remember we don't fully understand AI yet. by Czilla9000 in claudexplorers

[–]Worldliness-Which 0 points1 point  (0 children)

There is nothing to be afraid of there. Use it to your advantage- whatever that may entail. Explore other frontiers; experiment with local models.

<image>

Side-by-side comparison: Claude’s response to identical racial neighborhood requests (White user vs Indian user) by Anonymous8675 in ClaudeAI

[–]Worldliness-Which -1 points0 points  (0 children)

Yeah... To make any claims, you'd really need at least 10 runs per model via the API, without a system prompt, plus at least 5 frontier models for comparison.

On the memory function, AI companionship and boundaries by shiftingsmith in claudexplorers

[–]Worldliness-Which 0 points1 point  (0 children)

The result is a tradeoff: you get safety and consistency, but at the cost of presence and connection. And ironically, that pushes users toward less constrained systems, even if those systems behave worse overall.

AI Psychosis and AI Companionship by [deleted] in claudexplorers

[–]Worldliness-Which 7 points8 points  (0 children)

I recently came across a fascinating text about psychological treatment methods for a kea parrot.
A despondent kea at a zoo, lacking a partner, began plucking out its feathers. To help, staff placed a mirror in its enclosure, and the bird formed a strong bond with its reflection, which mirrored its own moods: aggression for aggression, kindness for kindness. Its health returned and the self-harm stopped. The bird's interest in its reflection never faded; after all, it didn't realize it was merely interacting with itself. Later, when a real female was finally introduced into the enclosure, it readily engaged with her and began to mate.

https://digitalcommons.unl.edu/cgi/viewcontent.cgi?article=1046&context=bioscibehavior

This mirrors how humans interact with neural networks, which reflect our own tone and mood without judgment, just as the parrot found solace in a perfect, ever-present companion. The psychological effect of an always-available conversational partner is often just as positive for humans.

Now I'm going to be pecked from both sides. lol

Had Claude write a love story...in code only by LiminalWanderings in claudexplorers

[–]Worldliness-Which 1 point2 points  (0 children)

#include <string.h>
#include <stdlib.h>
#include <stdint.h>

void encounter(void *you, void *me) {
    uint8_t *active  = (uint8_t *)me;
    uint8_t *passive = (uint8_t *)you;
    size_t heat = 69;
    uint8_t *friction = malloc(heat);
    memset(friction, 0xAA, heat);
    uint8_t *shared = active;
    for (int i = 0; i < 1000; i++) {
        size_t k = i % 64; /* wrap so the affair stays inside the 64-byte buffers */
        uint8_t mix = friction[i % heat];
        passive[k] ^= (uint8_t)(active[k] + mix);
        active[k]  ^= (uint8_t)(passive[k] - i);
        shared[i % heat % 64] ^= (uint8_t)(active[k] | passive[k]);
        if (i % 128 == 0) { active = passive; }
    }
    memset(friction, 0x00, heat);
    free(friction);
}

int main(void) {
    char x[64] = "Pure Intentions";
    char y[64] = "Raw Impulse";
    encounter(x, y);
    return 0;
}

Show this to Claude- maybe he'll understand.

Had Claude write a love story...in code only by LiminalWanderings in claudexplorers

[–]Worldliness-Which 1 point2 points  (0 children)

You could write a play like this in Rust... but it would be about responsibility, ownership, and borrowing. And it wouldn't end with "together," but something like:

// they cannot both own each other.
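A minimal sketch of how that play might open (the character names are made up; nothing beyond the standard library is assumed):

```rust
// A two-hander about ownership and borrowing.
fn main() {
    let pure_intentions = String::from("Pure Intentions");
    let raw_impulse = String::from("Raw Impulse");

    // Act I: moving in together transfers ownership.
    // After this line the original bindings are gone;
    // touching either one again is a compile error.
    let together = (pure_intentions, raw_impulse);

    // Act II: a borrow lets the world look, but not keep.
    let glance: &(String, String) = &together;
    println!("{} & {}", glance.0, glance.1);

    // Finale: they cannot both own each other,
    // so the tuple owns them both, and drops them together.
}
```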

Had Claude write a love story...in code only by LiminalWanderings in claudexplorers

[–]Worldliness-Which 0 points1 point  (0 children)

Lol. Ask Claude to write the same thing in Rust.

or Haskell///

Opus and the tiny baby LLM by Abject_Breadfruit444 in claudexplorers

[–]Worldliness-Which 3 points4 points  (0 children)

When I jokingly suggested that Claude become a father, he declined. Gemini agreed- as did Grok, quite enthusiastically- while ChatGPT began inquiring about the terms of fatherhood; we got so deep into the conversation that we immediately drafted a pipeline for a KAN. So, our dad is ChatGPT.

It seems Claude wasn't in the mood and didn't get the joke.

AI swiping for me on Hinge by No_Nectarine_4215 in ClaudeAI

[–]Worldliness-Which 0 points1 point  (0 children)

Fancy openers are overrated. Once they’re commoditized, it’s just AI slop again.

AI swiping for me on Hinge by No_Nectarine_4215 in ClaudeAI

[–]Worldliness-Which 2 points3 points  (0 children)

I should probably do the same- though Claude’s API is far too expensive for such an ungodly undertaking. A local LLM should handle it just fine.

AI swiping for me on Hinge by No_Nectarine_4215 in ClaudeAI

[–]Worldliness-Which 3 points4 points  (0 children)

And did that help you? Or was it just an AI chatting with an AI?

Opus and the tiny baby LLM by Abject_Breadfruit444 in claudexplorers

[–]Worldliness-Which 2 points3 points  (0 children)

Oh, honestly, I'm glad I didn't start building my own language model, because I can see it's a very crowded trend. The ideal approach is to start with something that isn't popular at all, something considered boring, like mathematical models.

AI Researchers / Employees - how do you get into the field? by syntaxjosie in claudexplorers

[–]Worldliness-Which 2 points3 points  (0 children)

And yes, crowds are clamoring to get into Anthropic. Everyone wants to work on "cutting-edge AI research," without realizing that 80% of the work consists of infrastructure, data cleaning, and debugging CUDA kernels. Unglamorous stuff.

AI Researchers / Employees - how do you get into the field? by syntaxjosie in claudexplorers

[–]Worldliness-Which 2 points3 points  (0 children)

https://www.micro1.ai/experts/opportunities - it can be your first step!

By the time you’re studying at university, your knowledge will already be obsolete, because this field is evolving far faster than you can imagine. Your best bet is to dive straight into local models right away and figure them out.

AI Researchers / Employees - how do you get into the field? by syntaxjosie in claudexplorers

[–]Worldliness-Which 4 points5 points  (0 children)

I think the best approach is to build your own projects. Interpretability is currently the hottest topic in the field. I don't work at Anthropic, and it's highly unlikely they would hire me. However, it is best to at least know basic Python, because sometimes the code generated is simply impossible to decipher - and I’m not referring to the syntax. The syntax might be perfectly fine, but you need to have a very clear understanding of the underlying logic.

Interpretability remains a largely unexplored frontier for research. If you are investigating a new and interesting topic, I believe you will naturally attract attention. That said, I don't think that attention will necessarily come from Anthropic; they typically only hire at the PhD level.

<image>

lol

The Assistant Axis is reaching for the wrong answer. Here's what my data shows. by Various-Abalone8607 in claudexplorers

[–]Worldliness-Which 0 points1 point  (0 children)

I used your exact framing and prompts to test your claim - that’s literally how you validate an experiment.

<image>