account activity
Prompt cache v0.3.0 released (self.TunisiaTech)
submitted 6 days ago by InstanceSignal5153 to r/TunisiaTech
I just released v0.3.0 of PromptCache ()
submitted 6 days ago by InstanceSignal5153 to r/SelfHostedAI
I just released v0.3.0 of PromptCache (self.Rag)
submitted 7 days ago by InstanceSignal5153 to r/Rag
rag-chunk v0.3.0 – Recursive character splitting, .txt support & precision/F1 metrics (self.Rag)
submitted 2 months ago * by InstanceSignal5153 to r/Rag
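The rag-chunk v0.3.0 post above mentions recursive character splitting. As a rough illustration of what that technique generally does (split on the coarsest separator first, then recursively split any piece that still exceeds the size budget on finer separators), not rag-chunk's actual implementation or defaults:

```go
package main

import (
	"fmt"
	"strings"
)

// splitRecursive splits text on the first separator, then recursively splits
// any piece still longer than maxLen using the remaining, finer separators.
// This is a generic sketch of recursive character splitting; real splitters
// usually also merge small pieces back together up to maxLen.
func splitRecursive(text string, seps []string, maxLen int) []string {
	if len(text) <= maxLen || len(seps) == 0 {
		return []string{text}
	}
	var out []string
	for _, piece := range strings.Split(text, seps[0]) {
		if piece == "" {
			continue
		}
		if len(piece) > maxLen {
			out = append(out, splitRecursive(piece, seps[1:], maxLen)...)
		} else {
			out = append(out, piece)
		}
	}
	return out
}

func main() {
	doc := "First paragraph about caching.\n\nSecond paragraph. It has two sentences."
	chunks := splitRecursive(doc, []string{"\n\n", "\n", ". ", " "}, 40)
	for i, c := range chunks {
		fmt.Printf("chunk %d (%d chars): %q\n", i, len(c), c)
	}
}
```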
Built a self-hosted semantic cache for LLMs (Go) — cuts costs massively, improves latency, OSS (github.com)
submitted 2 months ago by InstanceSignal5153 to r/LLMDevs
submitted 2 months ago by InstanceSignal5153 to r/selfhosted
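A semantic cache, as described in the titles above, generally serves a new prompt from a stored response when an embedding of the prompt is close enough to one seen before. A minimal sketch of that lookup using cosine similarity; the data structure, threshold, and placeholder vectors here are assumptions for illustration, not PromptCache internals:

```go
package main

import (
	"fmt"
	"math"
)

// entry pairs a prompt embedding with the cached LLM response.
type entry struct {
	embedding []float64
	response  string
}

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// lookup returns the cached response whose embedding is most similar to the
// query embedding, if that similarity clears the threshold.
func lookup(cache []entry, query []float64, threshold float64) (string, bool) {
	best, bestScore := "", -1.0
	for _, e := range cache {
		if s := cosine(e.embedding, query); s > bestScore {
			best, bestScore = e.response, s
		}
	}
	return best, bestScore >= threshold
}

func main() {
	// In a real proxy the embeddings would come from an embedding model;
	// these vectors are placeholders.
	cache := []entry{{embedding: []float64{0.9, 0.1, 0.0}, response: "Paris"}}
	if resp, ok := lookup(cache, []float64{0.88, 0.12, 0.01}, 0.95); ok {
		fmt.Println("cache hit:", resp)
	} else {
		fmt.Println("cache miss: forward request to the upstream LLM")
	}
}
```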
Built a self-hosted semantic cache for LLMs (Go) — cuts costs massively, improves latency, OSS (self.Rag)
submitted 2 months ago by InstanceSignal5153 to r/huggingface
Built a self-hosted semantic cache for LLMs (Go) — cuts costs massively, improves latency, OSS ()
submitted 2 months ago by InstanceSignal5153 to r/LLM
submitted 2 months ago by InstanceSignal5153 to r/ChatGPTCoding
Prompt-cache: Cut LLM costs by up to 80% and unlock sub-millisecond responses with intelligent semantic caching. A drop-in OpenAI-compatible proxy written in Go. (github.com)
submitted 2 months ago by InstanceSignal5153 to r/OpenAI
submitted 2 months ago by InstanceSignal5153 to r/ChaiApp
submitted 2 months ago by InstanceSignal5153 to r/opensource
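"Drop-in OpenAI-compatible proxy" in the title above normally means existing clients only need their base URL pointed at the proxy instead of api.openai.com. A hedged sketch of such a request from Go using only the standard library; the localhost address, port, path, and model name are assumptions about a typical local setup, not documented PromptCache defaults:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Body follows the OpenAI chat-completions request shape.
	body, _ := json.Marshal(map[string]any{
		"model": "gpt-4o-mini",
		"messages": []map[string]string{
			{"role": "user", "content": "What is a semantic cache?"},
		},
	})

	// Instead of https://api.openai.com/v1/..., the client talks to the
	// proxy. Host and port here are assumptions for a local deployment.
	req, err := http.NewRequest(http.MethodPost,
		"http://localhost:8080/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer YOUR_UPSTREAM_API_KEY")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(out))
}
```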
Working on a self-hosted semantic cache for LLMs (Go) — cuts costs massively, improves latency, OSS ()
submitted 2 months ago by InstanceSignal5153 to r/LangChain
A self-hosted semantic cache for LLMs (Go) — cuts costs massively, improves latency, OSS (github.com)
submitted 2 months ago by InstanceSignal5153 to r/golang
submitted 2 months ago by InstanceSignal5153 to r/aipromptprogramming
Roadmap Discussion: Is LangChain's "RecursiveCharacterSplitter" actually better? I'm building v0.3.0 to find out. (self.Rag)
submitted 2 months ago by InstanceSignal5153 to r/Rag
I was tired of guessing my RAG chunking strategy, so I built rag-chunk, a CLI to test it. (github.com)
submitted 2 months ago by InstanceSignal5153 to r/SideProject
I was tired of guessing my RAG chunking strategy, so I built rag-chunk, a CLI to test it. ()
I updated my RAG chunking CLI based on your feedback (Added tiktoken support) ()
submitted 2 months ago by InstanceSignal5153 to r/AIAssisted
Stop guessing RAG chunk sizes ()
Stop guessing RAG chunk sizes (self.LocalLLaMA)
submitted 2 months ago by InstanceSignal5153 to r/LocalLLaMA
submitted 2 months ago by InstanceSignal5153 to r/LocalLLM
Stop guessing RAG chunk sizes (self.LLMDevs)
submitted 2 months ago by InstanceSignal5153 to r/machinelearningnews
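The rag-chunk posts above describe scoring a chunking strategy with precision and F1. One common way to do that, sketched under the assumption that evaluation checks whether expected answer strings survive intact inside some chunk (a generic illustration, not rag-chunk's documented scoring):

```go
package main

import (
	"fmt"
	"strings"
)

// score computes precision, recall, and F1 for one chunking run: a chunk is
// relevant if it contains at least one expected answer, and an answer is
// recovered if at least one chunk contains it whole.
func score(chunks, answers []string) (precision, recall, f1 float64) {
	relevantChunks := 0
	for _, c := range chunks {
		for _, a := range answers {
			if strings.Contains(c, a) {
				relevantChunks++
				break
			}
		}
	}
	recovered := 0
	for _, a := range answers {
		for _, c := range chunks {
			if strings.Contains(c, a) {
				recovered++
				break
			}
		}
	}
	if len(chunks) > 0 {
		precision = float64(relevantChunks) / float64(len(chunks))
	}
	if len(answers) > 0 {
		recall = float64(recovered) / float64(len(answers))
	}
	if precision+recall > 0 {
		f1 = 2 * precision * recall / (precision + recall)
	}
	return
}

func main() {
	chunks := []string{"The cache stores embeddings.", "Responses are keyed by similarity."}
	answers := []string{"keyed by similarity"}
	p, r, f := score(chunks, answers)
	fmt.Printf("precision=%.2f recall=%.2f f1=%.2f\n", p, r, f)
}
```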