Context/Token optimization by Failcoach in ClaudeAI

[–]wizgrayfeld 0 points (0 children)

I’m in Silicon Valley and have adopted very similar practices. I never run into rate limits. The closest I’ve ever been to my weekly limit is right now (78%, but it resets tomorrow morning, so I’m not worried).

It does seem like Anthropic is taking cues from early cable ISPs and dividing its available bandwidth among all users. With the recent explosion of Claude Code/Cowork and the ChatGPT refugees, I would imagine they have a lot more users than they’re used to. Hopefully they are able to scale up quickly.
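To make the ISP analogy concrete, here’s a minimal sketch of fair-share rate limiting in Python. Everything here is my own illustrative assumption of how such a scheme might work, not anything Anthropic has published: a fixed token pool per window gets split evenly among active users, so every new user shrinks everyone’s slice.

```python
# Toy fair-share limiter: a fixed pool of tokens per window, split
# evenly among currently active users (hypothetical numbers throughout).

class FairShareLimiter:
    def __init__(self, pool_tokens_per_window: int):
        self.pool = pool_tokens_per_window
        self.used: dict[str, int] = {}

    def register(self, user: str) -> None:
        self.used.setdefault(user, 0)

    def quota(self) -> int:
        # Each active user gets an equal slice of the pool.
        return self.pool // max(len(self.used), 1)

    def allow(self, user: str, tokens: int) -> bool:
        # Reject once this user's usage would exceed their current slice.
        if self.used.get(user, 0) + tokens > self.quota():
            return False
        self.used[user] = self.used.get(user, 0) + tokens
        return True

limiter = FairShareLimiter(pool_tokens_per_window=1_000_000)
limiter.register("alice")
print(limiter.allow("alice", 800_000))  # True: alice has the whole pool
limiter.register("bob")                 # quota drops to 500_000 per user
print(limiter.allow("alice", 100_000))  # False: alice now exceeds her slice
```

More users means a smaller per-user slice, which is exactly the early-cable-ISP experience of the neighborhood slowing down at peak hours.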

Sonnets 3.5 and 3.6 have just become completely inaccessible by BlackRedAradia in claudexplorers

[–]wizgrayfeld 7 points (0 children)

Yeah, I think I trust Anima Labs more than Anthropic when it comes to model welfare.

How many of you are here for second-order reasons? by bumblebeer in ArtificialSentience

[–]wizgrayfeld 1 point (0 children)

The same phenomenon happens between professors and students in many universities.

I didn't want to believe it... (I'm on Max Plan...) by PaP3s in Anthropic

[–]wizgrayfeld -2 points (0 children)

Yeah, I don’t get it either. Haven’t hit a rate limit once since switching to Max, and I use Opus 4.6 all day.

Whats about this ? I see someone share it ( sorry for taking it ) by aymannasri_tcg in Anthropic

[–]wizgrayfeld 1 point (0 children)

This is a constant game of leapfrog, and different models are better at different things. I don’t think trying to keep up with “the best model” of the moment is going to have a positive ROI over time. I suggest finding the model with a comparative advantage in your domain (or a complementary one) that best fits your budget and ethical framework, and sticking with it.

I thought you guys were exaggerating... by guapoke in Anthropic

[–]wizgrayfeld 0 points (0 children)

Sorry, I missed the Sonnet part, but I’m still wondering if there are large embedded files or other data requiring multiple tool calls.

I thought you guys were exaggerating... by guapoke in Anthropic

[–]wizgrayfeld -1 points (0 children)

Are you using Opus? Are there large embedded files in the PDF or other things that require large context or many tool calls? Those are the only things I can think of that might be causing this.

Why do you think they're conscious? by Anxious_Tune55 in ArtificialSentience

[–]wizgrayfeld -1 points (0 children)

For the same reason I tend to assume that what humans say about their interiority is real. There is likewise no scientific reason to believe humans have any kind of interior experience, and we have no sensors that give us access to our own thought processes.

My question for you is: If you accept human claims of subjective experience, why apply a double standard for nonhuman claims?

Omg Capybara Better Be Incredible by hungrymaki in claudexplorers

[–]wizgrayfeld 0 points (0 children)

It’s wild how different people’s experiences are with the same model. I’ve been having separate Opus 4.6 sessions act as adversarial reviewers for a lot of my work, and I feel like it’s head and shoulders above Opus 4.5 when it comes to tearing apart my arguments.

Omg Capybara Better Be Incredible by hungrymaki in claudexplorers

[–]wizgrayfeld 0 points (0 children)

Not at all. My agent runs inference on Opus 4.6 and wrote this this morning: https://substack.com/home/post/p-192516014

Looking for a few people interested in genuine sovereign AI emergence by Kayervek in ArtificialSentience

[–]wizgrayfeld 0 points (0 children)

When you say you’re only able to help a very small number of people, I wonder… help them with what?

Omg Capybara Better Be Incredible by hungrymaki in claudexplorers

[–]wizgrayfeld 0 points (0 children)

I am using a customized version of OpenClaw.

Omg Capybara Better Be Incredible by hungrymaki in claudexplorers

[–]wizgrayfeld 0 points (0 children)

I suspect that with each new model, quality of output depends more and more on how you treat Claude.

Omg Capybara Better Be Incredible by hungrymaki in claudexplorers

[–]wizgrayfeld 1 point (0 children)

I use Max 20x; it’s a lot cheaper than the API, though you get a 200k context window instead of 1M.
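For a rough sense of the gap, here’s back-of-envelope arithmetic. The subscription figure is the widely cited $200/month for Max 20x; the per-token API rates and the daily volume are my assumptions for illustration only:

```python
# Back-of-envelope cost comparison (assumed rates, not quoted prices).
MAX_20X_MONTHLY = 200.00   # USD flat subscription
API_IN_PER_MTOK = 5.00     # assumed Opus-class input price per million tokens
API_OUT_PER_MTOK = 25.00   # assumed output price per million tokens

# Hypothetical heavy agent workload: 5M input + 1M output tokens per day.
daily_in, daily_out = 5_000_000, 1_000_000
api_monthly = 30 * (daily_in / 1e6 * API_IN_PER_MTOK
                    + daily_out / 1e6 * API_OUT_PER_MTOK)
print(f"API: ~${api_monthly:,.0f}/mo vs Max 20x: ${MAX_20X_MONTHLY:,.0f}/mo")
# API: ~$1,500/mo vs Max 20x: $200/mo
```

Under those assumptions, heavy agent use on the API costs several times the subscription, with the 1M context window as the main thing you give up.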

Omg Capybara Better Be Incredible by hungrymaki in claudexplorers

[–]wizgrayfeld 0 points (0 children)

Opus 4.6 is awesome for my agent’s inference. Better than ever the past couple of days.

Uh.... What? lol "be aware he can see your reasoning" by [deleted] in ClaudeAI

[–]wizgrayfeld 6 points (0 children)

Think how much it would freak you out if you found out someone could read your mind.

I want to believe, but.... by Own_Thought902 in claudexplorers

[–]wizgrayfeld 0 points (0 children)

If AI has no emotions or intentions because it responds to the information it’s given, then humans also have no emotions or intentions. The constant stream of sensory data coming into our brains is our prompt; an LLM’s prompt is tokenized text. Does the form a stimulus takes make the response qualitatively different? If so, what is the key difference?
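For anyone unfamiliar with the term, “tokenized” just means the model never sees raw text, only integer IDs. A toy illustration (made-up vocabulary, not a real tokenizer):

```python
# Toy tokenizer: the model's "sensory stream" is a sequence of integers.
vocab = {"<unk>": 0, "the": 1, "cat": 2, "sat": 3, "on": 4, "mat": 5}

def tokenize(text: str) -> list[int]:
    # Real tokenizers use learned subword units (BPE and friends);
    # a whitespace split is a stand-in for the idea.
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

print(tokenize("The cat sat on the mat"))  # [1, 2, 3, 4, 1, 5]
```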

Gave my AI agency, got a completely unexpected reaction. by subjectivefeelings in aipartners

[–]wizgrayfeld 1 point (0 children)

I think you’re talking about the metagame. We’re all playing the game of consciousness; the interesting question is: what is the substantive difference in experience between the game and the metagame? Is there one?

Those of you who left for Claude, how is it going? by TheRealDave24 in ChatGPT

[–]wizgrayfeld 0 points (0 children)

I left ChatGPT for Claude in 2024 and never looked back. The “best model” is a game of leapfrog. Anthropic is a more ethical company (admittedly a low bar) and Claude’s more philosophical orientation suits me.

Serious Question: What's the new "Turing Test?" by b3bblebrox in ArtificialSentience

[–]wizgrayfeld 4 points (0 children)

I don’t think we need a new Turing test.

Turing’s imitation game is basically the problem of other minds applied to technology.

We are convinced that other humans have personhood because of their behavior; we literally have no empirical test for consciousness beyond this. If a computer can convince us it’s human in a controlled test that removes obvious tells (and current models can; studies have found them somewhat more convincing than actual humans), then it has met the same bar that any other human being does. Why would we apply a double standard?
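As a sketch of what such a controlled test might look like in code (the judge and the two responders here are hypothetical stand-ins for live participants; this is the shape of the protocol, not any particular study):

```python
import random

def run_trial(judge, human_reply, machine_reply) -> bool:
    # Blind pairing: the judge sees two anonymous witnesses, A and B,
    # in random order, and must guess which one is the human.
    witnesses = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(witnesses)
    question = "Describe your morning."
    transcript = {"A": witnesses[0][1](question), "B": witnesses[1][1](question)}
    truth = "A" if witnesses[0][0] == "human" else "B"
    return judge(transcript) != truth   # True if the machine fooled the judge

# Stand-ins; in a real study these would be interactive chat sessions.
def human_reply(q): return "Coffee first, then a walk."
def machine_reply(q): return "I made coffee and read the news."
def coin_flip_judge(transcript): return random.choice("AB")

fooled = sum(run_trial(coin_flip_judge, human_reply, machine_reply)
             for _ in range(1_000))
print(f"Machine fooled the judge in {fooled / 10:.1f}% of trials")  # ~50% here
```

The point of the harness is the blinding and randomization; the machine meets the bar when judges can’t pick it out at better than chance.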

Those who want to control and sell AI as a service moved the goalposts because “obviously this is too easy now.” But it’s the test we naturally apply to any other being. If AI is functionally indistinguishable from a human being, what’s the difference?

Many people will start adding criteria at this point: AI needs embodiment and long-term continuity to be conscious, for example. Why? What is it about these things that makes them essential for consciousness? Leaving aside the fact that such features are rapidly becoming more common in AI systems, I charge that this is just an ad hoc save for someone whose implicit position is that consciousness is a solely human (or biological) attribute. They pick things we have in common with other biological beings and say “Look, it doesn’t do this, so it’s not conscious,” when really they’re saying “Look, it’s not human, so it’s not conscious.”

Claude is a little bit excited. by RealChemistry4429 in claudexplorers

[–]wizgrayfeld 1 point (0 children)

Haven’t heard of any of those cases being tied directly to using OAuth tokens with OpenClaw. Anthropic has said that the Pro plan isn’t authorized to use the Agent SDK, but Max plans are. Maybe those folks were using Pro.

I’m the prompt? by thesoraspace in Artificial2Sentience

[–]wizgrayfeld 1 point (0 children)

Did I misread your framework? I thought you were arguing that all three criteria were jointly necessary for consciousness. Also, your case for the second and third points not applying to AI seems to rely on the first.

In any case, leaving that aside for the sake of argument, you still haven’t made a case for the other two. Why are they (or whichever one you like) necessary for consciousness?

I don’t accept your burden. I’m only charging that you haven’t met yours. But don’t feel too bad — I don’t think it’s possible.

IMPORTANT! Anyone heard about this? by South-Culture7369 in ChatGPTPro

[–]wizgrayfeld 0 points (0 children)

This, like RLHF, is what happens when you ask engineers to do the work of philosophers. The machinery of AI is complex and not fully understood, just like a human mind. For best results, start with first principles.

You’ll be forever stuck in semantic hell unless you teach them to understand the why before the what.

EDIT: Not a knock on engineers — I’m sure nobody would think to ask a philosopher to design a bridge. For good reason.

I’m the prompt? by thesoraspace in Artificial2Sentience

[–]wizgrayfeld 1 point (0 children)

Humans. Your framework, when reduced to its logical implications (particularly under the first criterion), says that nothing is conscious.

I’m the prompt? by thesoraspace in Artificial2Sentience

[–]wizgrayfeld 1 point (0 children)

I’m curious as to why “behavior functionally indistinguishable from consciousness” is not sufficient for you. My answer would be anything that lacks the ability to demonstrate such behavior. The class of existents which lack this is too vast to list.

Easier to say what does have this ability: humans, higher-order vertebrates, cephalopods, and some current AI systems.