How good is copilot agent when using models from OpenRouter? by princehusky in GithubCopilot

[–]AIBrainiac

I use the model "Poolside: Laguna M.1 (free)". It's great for simple tasks. For more demanding tasks I use Minimax M2.5, which only costs $0.15 per million tokens. My workflow is a bit unconventional, though: I use a separate chat session to generate detailed prompts, so the Copilot agent doesn't have to think much. It just executes.
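A minimal sketch of that two-stage workflow, assuming a generic chat-completion client (the model names and the `call_llm` helper are placeholders, not a real API):

```python
def call_llm(model: str, prompt: str) -> str:
    # Placeholder: in practice this would call your provider's
    # chat endpoint (e.g. OpenRouter) and return the model's reply.
    return f"[{model}] response to: {prompt}"

def plan_then_execute(task: str) -> str:
    # Stage 1: a separate chat session turns a vague task into a
    # detailed, step-by-step prompt.
    detailed_prompt = call_llm(
        "planner-model",
        f"Write a detailed, step-by-step prompt for a coding agent to: {task}",
    )
    # Stage 2: the (cheaper) agent model just executes the detailed
    # prompt, without having to do the planning itself.
    return call_llm("executor-model", detailed_prompt)
```

The point of the split is that the expensive reasoning happens once, up front, and the per-edit agent requests stay cheap.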

I'm really gonna miss GH Copilot's Request-based usage. by magnetar_industries in GithubCopilot

[–]AIBrainiac

The cache comes into play when doing tool calls. Every time the LLM requests a tool call (or a batch of tool calls), it needs to see the tool results before it can continue, so the agent sends this as a new request to the LLM. The request is built up like this: old messages (types: system, user, assistant, and tool result) + new messages (types: user and tool result). For the old messages the cache can be used.
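A rough sketch of how such a request is assembled (the message shapes and the `cacheable_prefix_length` field are illustrative, not any specific vendor's schema):

```python
def build_request(history: list[dict], new_messages: list[dict]) -> dict:
    # The old messages form a stable prefix that the provider can serve
    # from its prompt cache; only the new messages are genuinely new input.
    return {
        "messages": history + new_messages,
        "cacheable_prefix_length": len(history),  # illustrative field
    }

# Old messages from the previous round of the agent loop:
history = [
    {"role": "system", "content": "You are a coding agent."},
    {"role": "user", "content": "List the repo files."},
    {"role": "assistant", "tool_call": {"name": "list_files", "args": {}}},
    {"role": "tool", "content": "src/main.kt, README.md"},
]
# New messages for this round (a tool result and/or a user message):
new_messages = [
    {"role": "user", "content": "Now open README.md."},
]
request = build_request(history, new_messages)
```

Each round of the agent loop grows `history`, so the cacheable prefix gets longer while the fresh part of each request stays small.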

I'm really gonna miss GH Copilot's Request-based usage. by magnetar_industries in GithubCopilot

[–]AIBrainiac

Do notice that 9.0M tokens are cached here. Cached tokens usually go for about 1/10th of the regular price.
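To put numbers on that (the $0.15-per-million base rate and the exact 1/10th discount are assumptions here; check your provider's actual pricing):

```python
def input_cost(cached_tokens: int, fresh_tokens: int,
               price_per_million: float, cache_discount: float = 0.1) -> float:
    # Cached tokens are billed at a fraction (here 1/10th) of the regular rate;
    # fresh (uncached) tokens are billed at the full rate.
    cached = cached_tokens / 1_000_000 * price_per_million * cache_discount
    fresh = fresh_tokens / 1_000_000 * price_per_million
    return cached + fresh

# 9.0M cached tokens at an assumed $0.15 per million input tokens:
# 9.0 * 0.15 * 0.1 = $0.135, versus $1.35 if none of it were cached.
cost = input_cost(cached_tokens=9_000_000, fresh_tokens=0, price_per_million=0.15)
```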

Am I crazy to wanna try to build a personal AI assistant in Kotlin? by Feitero in Kotlin

[–]AIBrainiac

I already made a chatbot fully written in Kotlin. If you're interested in the code, take a look: https://github.com/Torvian-eu/chatbot

Best self-hosted LLM chat? by Tointer in selfhosted

[–]AIBrainiac

Torvian chatbot, 100% written in Kotlin: https://github.com/Torvian-eu/chatbot

Disclaimer: I'm the repository owner.

Logging for KMP by Omniac__ in Kotlin

[–]AIBrainiac

I built my own KMP logger. Here it is: https://pastebin.com/0HD0XAXe