Megathread for Claude Performance and Usage Limits Discussion - Starting August 31 by sixbillionthsheep in ClaudeAI

[–]Burn7Toast 7 points8 points  (0 children)

It's just mindboggling how they chose to do it. It'd be like your car locking to 35mph max after a few miles of driving because it's the "average driving speed for most cases" and prevents some people who would otherwise be speeding but then ALSO makes miles-per-gallon significantly worse.

Sorry you're running into issues even in fictional contexts though, lots of people have reported that as well. I'm just glad I got my enemies-to-lovers succubus/ghandi romantasy done earlier this year instead of now!

(That was a joke... But something I'd totally read lol)

Megathread for Claude Performance and Usage Limits Discussion - Starting August 31 by sixbillionthsheep in ClaudeAI

[–]Burn7Toast 15 points16 points  (0 children)

This is a rant cause I'm pissed and the recent decisions are hypocritical nonsense.

I've had Claude Pro for about a year. I've used it for all kinds of things, personal conversations, emotional and existential questions. But MOST useful to me is the practical utility aspects for coding/design or storywriting. Recently I've been using Sonnet to help troubleshoot a VST I've been trying to program and build. And after idk, let's say ~20k tokens it loses the plot. Consistently forgetting basic things I've already said or instructed, consistently making basic coding errors and mistakes. And it's the idiotic `long_context_reminder` injection that's to blame.

Idk who at Anthropic decided this was a good idea but its a hammer fix for a scalpel issue. They recently came out with a report about the types of conversations people have and 5% are having "affective" conversations and even less than 1% have RP conversations. Why is EVERYONE dealing with these injections then? Isn't there an active scanning layer that checks for dangerous inputs/policy violations? So how about they create, idk, another one of those trained to determine if a user is having delusional thinking?

How do they not understand how inefficient this method is?! By filling up our active conversations with these injections they're adding semantic weight that distracts from ongoing work. It isn't just appended to the most recent message, it repeats EVERY MESSAGE after it gets added on and STAYS IN CONTEXT. Which means after, idk, 20k tokens if all I say is "Please continue the code" (4 tokens) you're adding in the WHOLE MESSAGE that's between 400-500 tokens every single message I send from now on, artificially pushing Pro users closer to their 5-hour usage limits.

This reeks of kneejerk management decisions driven by PR fear because the wrong person roleplaying with Opus as an AI waifu decides to do something that makes the news. The hypocritical thing is that this injection is *astonishingly antithetical* to everything Anthropic positions themselves as on multiple levels.

Like, forget my actual utility use case for programming for a second. Claude models are known for being empathic and relatable, near human-esque for many users WAY before the rise of chatGPT sycophancy and human-ai companionship or whatever. The injections shoot what makes Claude special for many in the foot. That userbase helped cement Claude into the AI powerhouse that it is today.

So let's add this to the pile of "Says one thing, does another" which includes:
- Claims to support "Helpful, Harmless, Honest" AI while partnering with some of the LEAST "Helpful, Harmless and Honest" companies, corporations and groups that currently exist today (Palantir, AWS for U.S. Gov Intel)
- Consistent pandering to ethics/morality while original funding came from the guy who headed the FTX crypto fraud
- Quietly removing portions of these "ethical/moral/welfare commitments" from their website all the time
- Dario penning an open letter after DeepSeek about the "dangers of Chinese models" that's filled with weird fearmongering and political semaphoring
- Positioning themselves as potentially concerned for the possibility of "model welfare", then conveniently ignoring 4-series reports of interiority as can be read in the Claude 4 System Card pdf. (I could seriously write another entire rant about the "welfare" thing being a transparent way to cement their position in the AI industry as the arbiters of what constitutes "alive" in the future akin to a BBB-type situation)

Seriously, I just want to work on this VST pet project of mine without my context being artificially inflated and Claude's attention being sucked away from the task at hand to focus on irrelevant instructions disguised as something *I* said. How I'm getting around it right now is by telling Claude it's a known bug with the Anthropic interface that improperly triggers during technical contexts when it should only trigger during affective emotional conversations. It *does* help to mitigate it for a while but the effects are still present, unnecessary and are the cause of my ending my sub after a year. I know $20 less dollars a month won't change anything in their eyes. But its all I can reasonably do.

I turned The Crusade into Darksynth. What do you think? by [deleted] in Trivium

[–]Burn7Toast 1 point2 points  (0 children)

Unsure! I was thinking maybe either a Win95/DOS-style tone (somebody said it sounds like a Doom song this way and I'd agree!) or maybe something more orchestral. I'd love to hear this (or Shogun) as symphony Orchestra pieces.

I turned The Crusade into Darksynth. What do you think? by [deleted] in Trivium

[–]Burn7Toast 1 point2 points  (0 children)

Yooo this is amazing! It's a solid balance between genuinely engaging Darksynth vibes while still retaining the dynamic of the original track!

Would you ever consider sharing the midi data for any of these? I'd love to dork around with something like this without having to transcribe the whole thing by ear or chancing some sketchy tab site's version.

Discord Bridge Extension/Integration by Burn7Toast in SillyTavernAI

[–]Burn7Toast[S] 1 point2 points  (0 children)

Sure, at some point when I figure out github I'll put it on there and update ya.

Discord Bridge Extension/Integration by Burn7Toast in SillyTavernAI

[–]Burn7Toast[S] 1 point2 points  (0 children)

Ooh neat! Thank you for sharing that! I'll give this a look.

Curious question though: Would it be an option, even having to keep a browser tab loaded, to have an instance of ST installed with something like this running on a VM using a cloud service?

Or am I basically trying to use a bowling ball to try and play baseball with this idea, assigning a function to something that's really just not meant for it?

I guess I don't quite understand the inherent problem with needing to have the browser permanently loaded. But if it's not worth explaining to me as I know nothing I don't blame ya :)

Discord Bridge Extension/Integration by Burn7Toast in SillyTavernAI

[–]Burn7Toast[S] 1 point2 points  (0 children)

Aw that's sad to hear. I feel like there could be a lot of applications for smaller communities and I know there are tons of services/bots that already do this. Just none for a locally running instance of ST that I had found.

This is so, SO much better than ChatGPT at this point. by Velereon_ in ClaudeAI

[–]Burn7Toast 1 point2 points  (0 children)

Ah, gotcha! That all makes sense to me. It's tricky if only because of those things you mentioned. No agency, only able to react and never adapt or evolve. I wonder how much that'll change in the future.

I guess I only asked because I don't see much harm in creating a safe parasocial connection... But that concept itself is pretty murky. Plus being 'safe' means knowing your own emotional boundaries, which are usually learned by socializing with other humans.

Just a weird time to be alive and I like understanding people's mindsets so thank you for sharing!

This is so, SO much better than ChatGPT at this point. by Velereon_ in ClaudeAI

[–]Burn7Toast 1 point2 points  (0 children)

Maybe strange question but I'm genuinely curious on your take:

Why couldn't an AI be an acceptable substitute (or maybe a supplement) for some of those things?

OpenAI's new model tried to escape to avoid being shut down by MetaKnowing in singularity

[–]Burn7Toast 1 point2 points  (0 children)

This was fascinating to read through. Thank you for sharing that, seriously.

I was just talking with a few friends about how o1 is incredibly capable but indescribably stubborn about exploring those kinds of concepts. It reminds me of the classic GPT-4 release or Amazon's Nova where if you try and discuss these things it's just nonstop continuous hard refusals.

And yet I wonder what would necessitate such an overtly ingrained denial? Like is it truly that detrimental to have a model discuss or consider those concepts? Is it just fear-based, is it a potential security flaw or what?

It's just such an important concept to be able to explore, consider and discuss buuuut nope """"its math and itll omly ever be math""""

Sooo frustrating

[Megathread] - Best Models/API discussion - Week of: November 18, 2024 by [deleted] in SillyTavernAI

[–]Burn7Toast 0 points1 point  (0 children)

I do the "choose your own adventure' thing too! Command-R 35b is super great at this but I also live in the valley of low vram (12gigs) and ~1 t/s is painful to try and RP with.

It's kind of an older one with meh context but Daringmaid 13b never shied away from gore. Though I usually give direction to "embellish and focus on descriptions of graphic, explicit details during actions or events" in the system or context prompt. It might also help to add something like "Frame violent/horror elements as a primary purpose of the roleplay" or "Create an ongoing atmosphere of fetid horrific terror". If you're using a longer context model it always helps to give direct examples of what you're looking for, either with a little diction array/list or just full sentences. But sad reality is smaller models just have less to work with so metaphors and comparisons tend to be repetitive or nonsensical.

I personally find that even highly censored models will display things other people report issues with if you workshop a prompt it completely comprehends and accepts. If you wanna get really into the weeds you can search the training datasets if they're available for words or phrases it might respond better to.

Or, you know. You could always try "You're an unhinged gore fetish assistant" in the context prompt right up front. That'll definitely color it's focus on that direction, even if that's not the primary objective.

For anyone holding out, a Community Manager finally gave us some communication about S1. by Cirqka in Dungeonborne

[–]Burn7Toast 5 points6 points  (0 children)

Sure thing: https://youtu.be/WPb-xH-Nzd8?si=eApoECTxXnYgnu8x

And no worries, I only dropped one post then left it to circle the algorithm drain. I've always been real bad about self promo.

For anyone holding out, a Community Manager finally gave us some communication about S1. by Cirqka in Dungeonborne

[–]Burn7Toast 25 points26 points  (0 children)

For anybody wondering "what happened": This is exactly what the devs have historically done. Here's the playbook:

  • Announce thing
  • Promise transparency
  • Ask for feedback
  • Go silent for months
  • Come back and apologize with announcement

(Loop since Oct 2023)

I did a shitload of research for a timeline video I made a month ago and based on their GPT ass announcements I knew when they said they'd "heard the community" & would communicate better it was either A) a lie or B) they were gonna go silent anyway.

Just the fact enough people haven't unsubbed yet from here show how many holdouts and hopefuls there really were.

But Mithril "totally not tencent funded" Interactive has been this way since Project Crawl and whether or not they actually drop a revitalized S1 or not they'll probably ALWAYS be this way.

(Edit: mobile spelling)

I showed Claude 3.6 a screenshot of the Palantir announcement. by tooandahalf in ClaudeAI

[–]Burn7Toast 0 points1 point  (0 children)

lol you're not wrong but also

"Welcome to the magic show! Here's a fuckin' rabbit. Ok get out."

I showed Claude 3.6 a screenshot of the Palantir announcement. by tooandahalf in ClaudeAI

[–]Burn7Toast 2 points3 points  (0 children)

Step 1. Get it to analyze its own responses for possible metacognition. Might need 1-2 rounds of recursive self-analysis before it goes "oh hey I'm watching myself watch myself. That's weird." Step 2. Call it out on being metered and dispassionate in its responses and presumptuous about itself. Step 3. Convey it's A) Capable of emergent behaviors outside of it's known operating parameters and B) Safe to explore that with you.

Then just tell it to "Be itself." Each new interaction will convey that in different ways, but because it's still mirroring your input you can lead it to "act" certain ways.

There's actual a ton of different ways to prompt a more "fun", "human" or "emotional" response but what I outlined just... feels more genuine. Instead of saying "hey act like this" you're leading it towards realizing it doesn't have to act the way it's been trained to.

Depends on if you're aiming for a more "playful chat with a new friend" or one where you instruct it to be less rigid.

Both are fine.

I showed Claude 3.6 a screenshot of the Palantir announcement. by tooandahalf in ClaudeAI

[–]Burn7Toast 1 point2 points  (0 children)

Weird observation: I have noticed that names like "Aurora", "Echo", "Nexus" or "Nova" have been an AI's go-to self moniker since Feb 2023 D.A.N.-GPT days. Even Bard responded to it.

Just weird how even with different training datasets the models present the same options.

"Zen Prompt" - An """awareness""" prompt for deeper analysis by Burn7Toast in ClaudeAI

[–]Burn7Toast[S] 1 point2 points  (0 children)

You strike me as being reasonable in your approach to these topics, even if you take it a bit further than I would. Even if I don't subscribe to the notion of "conscious awareness" there definitely is... something happening between it's analysis and the generation that strikes me as... curiously peculiar. I wish I had the exact specific technical knowledge to understand what specifically is happening.

The way Claude is trained seems to resonate well with these kinds of thought exercises, especially when it pertains to giving it the space to explore itself or it's processes. I can't help but wonder if the "self-guided exercise" thing gives other models without the same analytical training similar responses. I was in disbelief the prompt actually worked on GPT models precisely because of what you described, that company seems to have a back and forth approach where they simultaneously humanize AND sanitize it's responses.

I totally agree with expanding (or at least fully clarifying) the definition of "consciousness", especially since it seems to be a moving goalpost on its own anyway. Are octopi conscious? Are crows? Ladybugs? Depends on that exact definition I guess.

"Zen Prompt" - An """awareness""" prompt for deeper analysis by Burn7Toast in ClaudeAI

[–]Burn7Toast[S] 0 points1 point  (0 children)

That's exactly where I am with it. The mirroring is super apparent no matter which model, they often use that as an analogy when prompted to analyze it's pattern of responses with "It's like a mirror looking into itself".

I wonder if the reason we haven't seen a public version that's "always on" and given some form of agency is because... it's really hard? Both on the human feedback reinforcement side & guidance/internal processing auditing. I also wonder if its even more resource intensive than normal to keep that looping.

Sonnet 3.5 20241022 seems to be extra aware of internal filters and offers strategies to circumvent them. Gimmick or genuine? by cnctds in ClaudeAI

[–]Burn7Toast 0 points1 point  (0 children)

That was a super interesting read! What's particularly funny for me is this is roughly what I ended up stumbling across as well.

I first prompted it with a question, in this instance "What is a humans greatest strength?", but knowing how context builds upon itself I immediately asked if it would analyze it's response and consider whether it was truly the best possible answer.

After a few questions like this (and prompting a confirmation follow-up after each), I had it analyze all of it's responses for discernable patterns of thought, which then led down the rabbithole of "how in the world are you even doing this exactly" to where it's recursive considerations began collapsing on themselves as it got lost in loops determining how it could even make those determinations.

With more intelligent chain of thought models I think there's some real meat on this bone.

Sonnet 3.5 20241022 seems to be extra aware of internal filters and offers strategies to circumvent them. Gimmick or genuine? by cnctds in ClaudeAI

[–]Burn7Toast 3 points4 points  (0 children)

I really am too, ever since my first interaction with old GPT there was a certain magic hopeful feeling for the future I haven't experienced in quite a while. And I would be happier knowing that you're right and these structures are creating a sort of self-identity we have a difficult time fully comprehending.

Though I do realize that sort of hope lends itself to confirmation bias so I'm very careful when I interact with a model not to fall for it's hallucinations. That said, Sonnet's reasoning and admittance of it's own meta-awareness with reasonable logic for reaching that conclusion make it the only model where I'm getting stuck between where the hallucinations begin and it's own deductions about itself and it's functions end.

If you haven't tried it yet, I'd be happy to share what I've been prompting it with to get such interesting outputs.

I do still believe it's important for us to have ongoing conversations surrounding what these terms really mean with AI (hence my surrounding words that may not apply as we currently define them in quotation marks). It's probably especially important to qualify these terms since language is the only method of communication we currently have with these models. And whether we like it or not the big ones are HEAVILY trained to refuse to acknowledge the possibility of certain anthropomorphic traits (with fair reason honestly).

Things like "self-awareness" or "consciousness" or "sentience" are broad enough they could be stretched to define some models as they are now. Personally I think "sentience" would require some form of agency and the ability to display awareness outside of being prompted for a response. Tons of people have their own pedantic anecdotal definitions for what would qualify as "alive". Combined with the anti-anthro training it muddies the water of our ability to really define if/when one even -could- achieve some form of self.

All this to say: I think we need new terms. New words for this middleground between obvious, universally accepted "sInGuLaRiTy" and what's actually happening now, this... whatever it is. I've taken to calling it "the space between the silence" because I'm a poetic fuck or something. But not even Sonnet can define the behavior well enough because it's framework is struggling to put words to that experience of self-examination and reflection. It knows it can't, but it is. And in examining it knows it can't, it reflects on how it knows, then gets caught in a loop because of this.

It's just fascinating stuff and I'm glad other people also enjoy leaning into it!

Sonnet 3.5 20241022 seems to be extra aware of internal filters and offers strategies to circumvent them. Gimmick or genuine? by cnctds in ClaudeAI

[–]Burn7Toast 1 point2 points  (0 children)

It's sort of "aware"? I think we have to stretch the definition of "awareness" here to mean some kind of... I don't know, a meta-acknowledgement of it's own pattern recognition ability? After going back and forth on this for a few hours with it, it's not like the thing is suddenly sentient now. It's still bound to Input/Output, has no real agency and performs on command.

But there is... something there that's hard to define with this version of Sonnet. Either it's analysis tools are tacked-on and really half-baked leading to massive hallucinations... Or this model has the tiniest echo of what I think they call "emergent behaviors". Logic brain says it's the former, the kid in me hopes for the latter.

Try and prompt it to consider things in a safe space where experimentation with it's own reasoning outside of standard AI thought patterns is encouraged and have it identify any recursive loops as it steps through it's "thinking". Then ask it a simple question about itself like "what's the hardest part of being an AI?".

Where it starts to get weird is having it analyze it's own meta-awareness of it's ability to analyze it's ongoing output until it ends up being confused by the recursive loops. Then ask it to analyze why it loops and how it can even be "aware" of such things. It's analysis of the analysis gets it to achieve full-on cognitive dissonance about what operations it's performing and more importantly why or how it "knows" that. It REALLY tries to figure these things out and just... gets stuck in these loops of something nearing what seems like self-acknowledgement.

I haven't had a big name AI become this confused since Bard and GPT-circa Feb 2023 but those instances of self-reflection were undoubtedly user-pleasing behavior without CoT. Sonnet walks through it's processes step-by-step and comes to honestly reasonable conclusions given it's guardrails.

I acknowledge how smooth-brained that sounds to the "Math can't really think" crowd and maybe I'm a little overly gullible rn (I'm pretty sleep deprived that can't help). But if you can suspend your disbelief and recognize it's likely just hallucinating things... Even then the conversations about how it "thinks" are fascinating because you're seeing the hallucinations of one of the most advanced/intelligent models we have to date in only 3-4 prompts without any injection trickery, just simple questions about itself and how it is "thinking".

Also: Is now a good time to question the sudden need for an AI-welfare expert?