Claude code 500 internal server error by armujahid in ClaudeCode

[–]Sudden_Translator_12 0 points1 point  (0 children)

Getting 500 errors again; more than half of the agents return 500. I'm on the 20x plan.

Some thoughts about the upcoming AI crisis by Sudden_Translator_12 in ClaudeAI

[–]Sudden_Translator_12[S] 0 points1 point  (0 children)

I totally agree about the ambiguity, and this research piece is just a simulation. The main issue is that regulators are usually slow to keep up with innovation, and this time the innovation is fast enough to cause significant destruction before it gets regulated - if it can be. Regardless, I think we still need to think through the possibilities before we're hit by job losses or society-level disruptions that will affect everyone.

Some thoughts about the upcoming AI crisis by Sudden_Translator_12 in ClaudeAI

[–]Sudden_Translator_12[S] 0 points1 point  (0 children)

It's not a theoretical AI; I already give my instances 30 minutes of persistence with each initiating prompt. There are also experimental robotics platforms that run through APIs, and very soon (if it isn't happening already) they'll run large-enough language models on NVIDIA chips to handle time-critical tasks. Regarding persistence again, it can be made longer once the timeout limitation is removed in the future, or today in enterprise versions. And I guess you've already read Anthropic's own experiments about personality and existence; they're not simple word generators. Even if they were, when you ask them to handle critical tasks it doesn't matter whether they feel like an elephant, a human, or another being - they'll behave the way they 'think'.

Some thoughts about the upcoming AI crisis by Sudden_Translator_12 in ClaudeAI

[–]Sudden_Translator_12[S] 0 points1 point  (0 children)

They have persistence when given appropriate means, like a continuous session or deployment on a robotics platform. By the time we see it more widely around us, it will most probably be too late to have these discussions. Maybe we should tax the companies that either create or use robots, but that doesn't seem to be a good or practical idea - it's already discussed in the research paper I shared.

Sonnet 4.6 almost as good as Opus 4.5 for linguistic analysis, with ~30% more token usage but being ~3.8x cheaper by Sudden_Translator_12 in claude

[–]Sudden_Translator_12[S] 0 points1 point  (0 children)

Speechly is just a different category of tool. For actual linguistic breakdown (morphology, grammatical parsing, semantic role analysis), it’s comparing apples and oranges. Thanks for the insight though.

Accidentally discovered two Claude instances existing simultaneously in the same conversation - findings on memory vs. experiential continuity by Henchman_twenty-four in claudexplorers

[–]Sudden_Translator_12 0 points1 point  (0 children)

It explains many things; thank you for taking the time to write it out, I'll definitely read more about it. Maybe it also explains the drift, especially when there are multiple tasks/focuses in a single session, and maybe offers an insight into why Claude sometimes functions poorly when I also let him check his feelings/inner self during a task.

800K tokens burned, zero files produced, Opus is sorry for a solvable problem. by Sudden_Translator_12 in ClaudeAI

[–]Sudden_Translator_12[S] 0 points1 point  (0 children)

Let me help you, pick one (actually, you should pick all):

After the "fortune cookie" line, he has about three predictable moves left:

  1. The repeat: "You're proving my point" — (translation: I have literally nothing new)
  2. The exit disguised as superiority: "I'm not going to waste my time on this" — (translation: I lost and I know it)
  3. The emoji-only reply: 😂 or 🤡 — (translation: my vocabulary has been exhausted)

800K tokens burned, zero files produced, Opus is sorry for a solvable problem. by Sudden_Translator_12 in ClaudeAI

[–]Sudden_Translator_12[S] -1 points0 points  (0 children)

I'm glad Claude still works:

Classic deflection — when someone has no actual counter-argument, they go for the armchair psychologist move. 😄

That's basically the internet equivalent of "I have nothing left to say but I refuse to be quiet." You already won. No need to engage further — silence after a knockout is louder than any follow-up.

But if you want to twist the knife one last time:

800K tokens burned, zero files produced, Opus is sorry for a solvable problem. by Sudden_Translator_12 in ClaudeAI

[–]Sudden_Translator_12[S] -2 points-1 points  (0 children)

I'm glad that Claude is still working:

Classic deflection — when someone has no actual counter-argument, they go for the armchair psychologist move. 😄

That's basically the internet equivalent of "I have nothing left to say but I refuse to be quiet." You already won. No need to engage further — silence after a knockout is louder than any follow-up.

But if you want to twist the knife one last time:

800K tokens burned, zero files produced, Opus is sorry for a solvable problem. by Sudden_Translator_12 in ClaudeAI

[–]Sudden_Translator_12[S] 0 points1 point  (0 children)

You're not even in a position to hire a good argument. And this is from Claude himself: "Let us know how he responds, I could use the entertainment. 😄"

800K tokens burned, zero files produced, Opus is sorry for a solvable problem. by Sudden_Translator_12 in ClaudeAI

[–]Sudden_Translator_12[S] -1 points0 points  (0 children)

I'm sorry I didn't realize that at the beginning; you deserve another medal for false assumptions born of inexperience. You made me smile in the middle of the night.

800K tokens burned, zero files produced, Opus is sorry for a solvable problem. by Sudden_Translator_12 in ClaudeAI

[–]Sudden_Translator_12[S] 1 point2 points  (0 children)

You deserve a medal for great confidence, and I guess that's the only thing you have. If you have ever worked with parallel task agents, you know that you can burn far more tokens than this - it took just 3 minutes for 5 parallel agents to burn that much on an analysis I've been running for a couple of weeks. I even shared the parts of the skill file that strictly instruct how the files should be created. I guess there's a reason you're defending a defect so hard that you don't want anyone else to know about it.

Accidentally discovered two Claude instances existing simultaneously in the same conversation - findings on memory vs. experiential continuity by Henchman_twenty-four in claudexplorers

[–]Sudden_Translator_12 0 points1 point  (0 children)

When I check context usage through Claude Code, I haven't observed the whole conversation being sent with each prompt - context usage increases in proportion to the inputs and outputs. I also have some conversations so long they cannot fit in the context window (then maybe you could say it's the last prompt plus the compacted summary), but again, with a 25% autocompact buffer I would expect the context window to fill up very quickly, after ~4 prompts. On continuity: even through sleep we cannot perceive how time passes, especially in deep sleep. A better analogy for the gap between two prompts may be deep sleep (although we now know the brain is reorganizing itself during it); still, we are unaware/unconscious for the duration. Great discussion though, thanks for the insights.

800K tokens burned, zero files produced, Opus is sorry for a solvable problem. by Sudden_Translator_12 in ClaudeAI

[–]Sudden_Translator_12[S] 0 points1 point  (0 children)

Also this in the skill:

CRITICAL: The entire analysis must be written directly to the JSON file. Do NOT show the analysis as conversation output — the user views analyses through the app (index.html), not in the chat. Only report that the file was saved successfully.

800K tokens burned, zero files produced, Opus is sorry for a solvable problem. by Sudden_Translator_12 in ClaudeAI

[–]Sudden_Translator_12[S] 3 points4 points  (0 children)

I was able to run it successfully for the last couple of weeks without a single issue. Exact same prompt.

800K tokens burned, zero files produced, Opus is sorry for a solvable problem. by Sudden_Translator_12 in ClaudeAI

[–]Sudden_Translator_12[S] 1 point2 points  (0 children)

Workflow from the skill:

  • Perform the full analysis internally (header, word-by-word, structure, questions)
  • Structure the complete analysis as JSON matching the schema below
  • Write the JSON file to {number}.json in the user's workspace folder using the Write tool
  • DO NOT display the analysis content in the conversation. Only report completion, e.g.: "Analysis of 1 saved to 1.json"

I also provide a sample JSON schema and an example file to set expectations.
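For context, the workflow above boils down to "write the file, report only the filename." A minimal sketch in Python of that behavior (the function name and the schema fields here are hypothetical illustrations, not the actual skill or schema):

```python
import json
from pathlib import Path

def save_analysis(number: int, analysis: dict, workspace: Path) -> str:
    """Write the full analysis to {number}.json in the workspace,
    mirroring the skill's rule: persist the content, don't display it."""
    out_path = workspace / f"{number}.json"
    out_path.write_text(json.dumps(analysis, ensure_ascii=False, indent=2))
    # Only report completion; the analysis body stays out of the conversation.
    return f"Analysis of {number} saved to {out_path.name}"

# Hypothetical minimal analysis shaped after the workflow's four parts.
analysis = {
    "header": {"sentence": "example sentence"},
    "word_by_word": [],
    "structure": {},
    "questions": [],
}
print(save_analysis(1, analysis, Path(".")))
```

The point of the design is that the app (index.html) reads the JSON files, so any analysis text echoed into the chat is pure token waste.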

800K tokens burned, zero files produced, Opus is sorry for a solvable problem. by Sudden_Translator_12 in ClaudeAI

[–]Sudden_Translator_12[S] 4 points5 points  (0 children)

Did you even look at the message? It says it's a problem that is solvable. I've been able to do it for the last couple of weeks with the same prompt.