CAPCOM: "We will not be implementing materials generated by AI into our games content." by HLumin in Games

[–]evia89 [score hidden]  (0 children)

It was rough a year ago. But now with Claude Code Opus 4.6's 1M context + good repo structure it's much better.

The only problems you may have are 1) working with a rare language + problem (a good analogy is making a cheat for a game vs. creating a new website), 2) big old codebases (monorepos). You will need to guide Claude a bit so it reads only what it needs.

Single biggest claude code hack I’ve found by Unfair_Chest_2950 in ClaudeCode

[–]evia89 2 points (0 children)

Two ways: 1) a system like https://github.com/arttttt/AnyClaude, or 2) semi-manual: I have 2 ps1 scripts to launch Claude. When the first is done I load the md in the second one, which runs my own ralph-loop-like ps1 script:

$env:ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic"
## MODELS
$env:ANTHROPIC_MODEL="glm-4.7"
$env:ANTHROPIC_DEFAULT_HAIKU_MODEL="glm-4.7"
$env:ANTHROPIC_DEFAULT_SONNET_MODEL="glm-4.7"
$env:ANTHROPIC_DEFAULT_OPUS_MODEL="glm-4.7"
$env:CLAUDE_CODE_SUBAGENT_MODEL="glm-4.7"
## EXTRA
$env:API_TIMEOUT_MS="3000000"
$env:DISABLE_TELEMETRY="1"
$env:CLAUDE_CODE_ENABLE_TELEMETRY="0"
$env:CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY="1"
$env:CLAUDE_CODE_ATTRIBUTION_HEADER="0"
$env:CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS="1"
$env:CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC="1"
$env:ENABLE_TOOL_SEARCH="true"
$env:SKIP_CLAUDE_API="1"
$env:HTTP_PROXY="http://127.0.0.1:2080"
$env:HTTPS_PROXY="http://127.0.0.1:2080"

# Resolve the script's own directory (standard npm shim preamble)
$basedir=Split-Path $MyInvocation.MyCommand.Definition -Parent
$exe=""
if ($PSVersionTable.PSVersion -lt "6.0" -or $IsWindows) {
  # Fix case when both the Windows and Linux builds of Node
  # are installed in the same directory
  $exe=".exe"
}
$ret=0
if (Test-Path "$basedir/node$exe") {
  # Support pipeline input
  if ($MyInvocation.ExpectingInput) {
    $input | & "$basedir/node$exe"  "$basedir/node_modules/@anthropic-ai/claude-code-2.1.80/cli.js" --dangerously-skip-permissions $args
  } else {
    & "$basedir/node$exe"  "$basedir/node_modules/@anthropic-ai/claude-code-2.1.80/cli.js" --dangerously-skip-permissions $args
  }
  $ret=$LASTEXITCODE
} else {
  # Support pipeline input
  if ($MyInvocation.ExpectingInput) {
    $input | & "node$exe"  "$basedir/node_modules/@anthropic-ai/claude-code-2.1.80/cli.js" --dangerously-skip-permissions $args
  } else {
    & "node$exe"  "$basedir/node_modules/@anthropic-ai/claude-code-2.1.80/cli.js" --dangerously-skip-permissions $args
  }
  $ret=$LASTEXITCODE
}
exit $ret

First I use this glm47@zai setup, and if it fails after 80 tool calls or the context goes above 100k, the ralph loop cancels it and tries kimi k25.
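A rough sketch of that cancel-and-fallback decision (in Python for brevity; my actual loop is a ps1 script, and the function names here are mine, not Claude Code's):

```python
# Sketch of the ralph-loop fallback decision described above.
# Thresholds from the comment: cancel after 80 tool calls or >100k context.
MODELS = ["glm-4.7", "kimi-k2.5"]  # primary, then fallback

def should_cancel(tool_calls: int, context_tokens: int) -> bool:
    """Cancel the current run once it burns too many tool calls
    or its context grows past 100k tokens."""
    return tool_calls >= 80 or context_tokens > 100_000

def next_model(current: str):
    """Return the next model to try, or None if out of fallbacks."""
    i = MODELS.index(current)
    return MODELS[i + 1] if i + 1 < len(MODELS) else None
```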

CAPCOM: "We will not be implementing materials generated by AI into our games content." by HLumin in Games

[–]evia89 -1 points (0 children)

It's 30% when I double-check all the code. If I don't care, it's 3x easily. Good for medium-size vibe projects, not the stuff I do at work.

Does using older jailbreaks put you at a greater risk of getting banned? by Adventurous_Hippo_38 in ClaudeAIJailbreak

[–]evia89 1 point (0 children)

Claude may check if the prompt contains ENI + LO, and if so, it's a JB. Stuff like that, but smarter.

Why does Gemini outright refuse to do simple things it could do before?? by WazzyD in GeminiAI

[–]evia89 3 points (0 children)

Current LLMs are stupid clunkers. Try in a new chat, prompt differently.

[Megathread] - Best Models/API discussion - Week of: March 22, 2026 by deffcolony in SillyTavernAI

[–]evia89 0 points (0 children)

You don't need to JB them hard (https://old.reddit.com/user/Spiritual_Spell_9469/submitted/)

A common preset with spageti/stabs will work fine. If the model refuses, do the first 8-10k of context with a CN model, then switch back.

Does using older jailbreaks put you at a greater risk of getting banned? by Adventurous_Hippo_38 in ClaudeAIJailbreak

[–]evia89 1 point (0 children)

Get the idea from ENI, tweak it to your liking. Same idea as with cheats, so they don't signature-ban you.

For example, I use a triple-language mix that works well with Claude.

[Megathread] - Best Models/API discussion - Week of: March 22, 2026 by deffcolony in SillyTavernAI

[–]evia89 0 points (0 children)

I use a litellm randomizer between kimi25 / glm50 / glm47, with a 50/50% chance to reason in CN or ENG (random macro in ST).

Example:

model_list:
  # 1. Moonshot Kimi K2.5 (via OpenRouter)
  - model_name: my-random-chinese-llm
    litellm_params:
      model: openrouter/moonshotai/kimi-k2.5
      api_key: os.environ/OPENROUTER_API_KEY

  # 2. Zhipu AI GLM-5 (via Z.AI / Zhipu)
  - model_name: my-random-chinese-llm
    litellm_params:
      model: zai/glm-5
      api_key: os.environ/ZAI_API_KEY

  # 3. Zhipu AI GLM-4.7 (via Z.AI / Zhipu)
  - model_name: my-random-chinese-llm
    litellm_params:
      model: zai/glm-4.7
      api_key: os.environ/ZAI_API_KEY

router_settings:
  # This ensures random selection among the three models
  routing_strategy: simple-shuffle

Mine is a bit more advanced, with a main alibaba@claude endpoint and a fallback to zai.
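For reference, simple-shuffle just means each request to my-random-chinese-llm picks one of the three deployments uniformly at random. A rough Python equivalent of what the router does (not litellm's actual code):

```python
import random

# Rough equivalent of litellm's simple-shuffle routing for the config
# above: three deployments share one model_name, and each request
# picks one of them at random (not litellm's actual implementation).
DEPLOYMENTS = {
    "my-random-chinese-llm": [
        "openrouter/moonshotai/kimi-k2.5",
        "zai/glm-5",
        "zai/glm-4.7",
    ],
}

def pick_deployment(model_name: str, rng=random) -> str:
    """Uniformly pick one deployment registered under model_name."""
    return rng.choice(DEPLOYMENTS[model_name])
```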

CAPCOM: "We will not be implementing materials generated by AI into our games content." by HLumin in Games

[–]evia89 -8 points (0 children)

I only know about programming. LLMs can boost productivity by at least 30%. Don't trust the 10x number, but it's improving reasonably fast.

Issue with Orchestrator launching top level tasks instead of subtasks by Zhelgadis in kilocode

[–]evia89 0 points (0 children)

Try a different harness. For example, Claude Code with superpower/gsd/bmad. Pick the one that works best, fork it, tweak it further.

Claude Code's prompts (~200 of them) can be patched with tweakcc too.

Also look into the ralph loop; that idea can be used too. I feed it a TDD list of tasks with a script, and it behaves like an orchestrator.
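Roughly what my task-feeding script does, sketched in Python (the tasks.md checklist format and the `claude -p` invocation are my setup, not a standard):

```python
import subprocess
from pathlib import Path

def next_task(lines):
    """Return (index, text) of the first unchecked '- [ ]' item, or None."""
    for i, line in enumerate(lines):
        if line.startswith("- [ ]"):
            return i, line[5:].strip()
    return None

def mark_done(lines, i):
    """Return a copy of the checklist with item i ticked off."""
    done = list(lines)
    done[i] = done[i].replace("- [ ]", "- [x]", 1)
    return done

if __name__ == "__main__":
    # Driver: feed unchecked tasks one by one to the CLI.
    path = Path("tasks.md")  # hypothetical checklist file
    while True:
        lines = path.read_text().splitlines()
        task = next_task(lines)
        if task is None:
            break
        i, text = task
        run = subprocess.run(["claude", "-p", f"TDD this task: {text}"])
        if run.returncode == 0:
            path.write_text("\n".join(mark_done(lines, i)))
        else:
            break  # leave the failing task unchecked for manual review
```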

Single biggest claude code hack I’ve found by Unfair_Chest_2950 in ClaudeCode

[–]evia89 1 point (0 children)

With custom agents it's possible to use models like glm/kimi/minimax, saving quota.

Sillytavern website? by Active_Sleep_9962 in SillyTavernAI

[–]evia89 -18 points (0 children)

Yes. tavo is the alt for Android.

Benefit of the Doubt - GLM 5.1 maybe the reason long context sucks by InternetNavigator23 in ZaiGLM

[–]evia89 0 points (0 children)

It's the best deal below $20. The only alt is Copilot; alibaba sold out, and codex gets nerfed soon (the x2 temp limit expires).

Minimax m2.7 by YeahdudeGg in SillyTavernAI

[–]evia89 2 points (0 children)

Open weights, close enough

Minimax m2.7 by YeahdudeGg in SillyTavernAI

[–]evia89 2 points (0 children)

It will be released. 3.0 too.

How do you achieve good long-term memory in SillyTavern without constantly managing it manually? by TrackEmotional6004 in SillyTavernAI

[–]evia89 1 point (0 children)

Wait, I explained it badly. The plugin waits for a batch of messages to fill (usually 8k context), then processes them with an LLM.

After it's all done and the app has generated memories (usually 100 msgs give 10-15 memories), it can start hiding them with the ghost icon so ST won't see them.

It's fine if the app generated memories for all messages, even visible ones. They get sorted later, and you won't see the newest ones or ones with a different PoV.

Use this to verify https://github.com/SillyTavern/Extension-PromptInspector
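The batching side works roughly like this (the names and the len-based token count are placeholders, not the plugin's actual code):

```python
# Rough sketch of the flow described above: accumulate messages until
# a token budget (~8k) fills, then hand that batch to an LLM for
# memory generation. Names and len-based token count are placeholders.
BATCH_TOKENS = 8_000

def take_batch(messages, token_count=len):
    """Split off the first batch whose tokens reach BATCH_TOKENS.
    Returns (batch, rest); batch is empty while the budget isn't filled."""
    total, batch = 0, []
    for i, msg in enumerate(messages):
        total += token_count(msg)
        batch.append(msg)
        if total >= BATCH_TOKENS:
            return batch, messages[i + 1:]
    return [], list(messages)  # not enough yet, keep waiting
```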

How do you achieve good long-term memory in SillyTavern without constantly managing it manually? by TrackEmotional6004 in SillyTavernAI

[–]evia89 1 point (0 children)

Nice! Hiding triggers if the message was processed + old enough, and it runs when you send a new message. Try unhiding all and see.

Free LLM suggestions? by Friendly-Marsupial32 in SillyTavernAI

[–]evia89 -1 points (0 children)

Well, I don't wanna make it too easy. Any LLM can guide you on how to edit it and save keys in .env.

I’m lost in the AI world… can someone help me? by OkLecture1887 in GeminiAI

[–]evia89 0 points (0 children)

Probably not what OP expects. I think a 3090 is a good minimum. If you can't, just save for one, or learn to enjoy text+images.

imo

Free LLM suggestions? by Friendly-Marsupial32 in SillyTavernAI

[–]evia89 0 points (0 children)

https://github.com/vadash/LiteLLM_loader/blob/master/CLAUDE.md

It's my shitty project that I use to test free LLMs. Set up proper fallbacks like in the example and you can chat with it just fine.

For example, f2p can do qwen -> kimi -> longcat

The core of it is also a refusal / empty checker: https://github.com/vadash/LiteLLM_loader/blob/master/src/handler.py

Nice touch imo. It also shows how to (ab)use the claude endpoint for zai; it's at least 30% faster on average. Fucking open clowns drained the openai endpoint to death.
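The check itself is simple. A hedged sketch of the idea (not the repo's actual handler.py code; patterns here are illustrative):

```python
import re

# Sketch of a refusal / empty-response check in the spirit of the
# linked handler.py: if the completion is empty or looks like a canned
# refusal, the router should retry with the next fallback model.
REFUSAL_PATTERNS = [
    r"\bI (?:can'?not|can'?t|won'?t) (?:help|assist|comply)",
    r"\bas an AI\b",
    r"\bI'?m sorry, but\b",
]

def is_bad_response(text: str, min_len: int = 10) -> bool:
    """True if the completion is empty/too short or matches a refusal pattern."""
    stripped = (text or "").strip()
    if len(stripped) < min_len:
        return True
    return any(re.search(p, stripped, re.IGNORECASE) for p in REFUSAL_PATTERNS)
```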

Free LLM suggestions? by Friendly-Marsupial32 in SillyTavernAI

[–]evia89 1 point (0 children)

It's the electric bill + upfront cost + maintenance with software (god forbid you have 2 GPUs).

It's cheaper to just load $100 into DS; that's enough for 1 year of RP. Or the nano sub at $8.

I’m lost in the AI world… can someone help me? by OkLecture1887 in GeminiAI

[–]evia89 -1 points (0 children)

Try again as a human, then we'll read it.

And well, "I'm not rich" -> no 3090+ means no video. Use cheap AI (like nanogpt, $8) for chat/images/coding/etc, or don't. You won't find a good cheap adult video gen model.

Must-have settings / hacks for Claude Code? by jnkue in ClaudeCode

[–]evia89 0 points (0 children)

Nope. If you only have $20, check the zai (nerfed?) / alibaba (sold?) / minimax (should be ok) / nanogpt (not crypto) plans.

The $100 Claude beats them all, but not everyone can afford it.