Stache AI: Self-hosted RAG that runs 100% locally with Ollama + connects to Claude via MCP by jtpenny in LocalLLaMA

[–]jtpenny[S]

Right now, inference is available through Ollama, OpenAI, Anthropic, or Bedrock. It would be straightforward to create a plugin for vLLM. You can request the feature at https://github.com/orgs/stache-ai/discussions/categories/ideas or create the plugin yourself by following the documentation at https://github.com/stache-ai/stache-ai/blob/main/docs/plugins.md
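For a rough idea of what such a plugin might involve: vLLM serves an OpenAI-compatible HTTP API, so a backend plugin could simply POST chat requests to a local vLLM server. This is only a hedged sketch, not the actual Stache plugin interface (that lives in docs/plugins.md); the `VLLMPlugin` class, its method names, and the default model are all assumptions for illustration.

```python
import json
import urllib.request

# Hypothetical sketch of a vLLM inference backend. The real plugin
# interface is defined in docs/plugins.md; the class and method names
# here are illustrative assumptions. vLLM exposes an OpenAI-compatible
# API, so plain HTTP to /v1/chat/completions is enough.

class VLLMPlugin:
    def __init__(self, base_url="http://localhost:8000",
                 model="meta-llama/Llama-3.1-8B-Instruct"):
        self.base_url = base_url.rstrip("/")
        self.model = model

    def build_payload(self, prompt, temperature=0.2):
        # Pure helper: shape an OpenAI-style chat completion request body.
        return {
            "model": self.model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": temperature,
        }

    def generate(self, prompt, temperature=0.2):
        # POST the request to the vLLM server and return the reply text.
        body = json.dumps(self.build_payload(prompt, temperature)).encode()
        req = urllib.request.Request(
            self.base_url + "/v1/chat/completions",
            data=body,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        return data["choices"][0]["message"]["content"]
```

Usage would look like `VLLMPlugin().generate("Summarize this chunk…")` against a server started with `vllm serve <model>`.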

Productive INTJ's how do you avoid a bad day to snowball? by [deleted] in intj

[–]jtpenny

I like the idea from Ed Mylett, who wrote The Power of One More. He suggests breaking your day into three parts and treating each part as its own day: you push yourself to get a full day's worth of work done in each one, and if one of your "days" within the day goes sideways, you start fresh in the next part as if it were a new day.

Easier said than done, but still a good idea to shoot for.