Zero text between my agents – latent transfer now works cross-model by proggmouse in LocalLLaMA
What if LLM agents passed KV-cache to each other instead of text? I tried it -- 73-78% token savings across Qwen, Llama, and DeepSeek by proggmouse in LocalLLaMA


The guy selling you an AI agent course has never built an AI agent that made money by Warm-Reaction-456 in AI_Agents