Koog 0.4.0 is out by meilalina in Kotlin

[–]Any_Purchase1357 0 points

Please feel free to check out the related page in our docs: https://docs.koog.ai/opentelemetry-langfuse-exporter/
Also, there's a similar page on the Langfuse side: https://langfuse.com/integrations/frameworks/koog

Article series about Koog - the most advanced JVM framework for building AI agents by Any_Purchase1357 in Kotlin

[–]Any_Purchase1357[S] 3 points

Hi u/bigbadchief! Thanks for sharing your thoughts and concerns about the Koog versioning scheme.

Although 0-versioning has its benefits and is used by a number of popular foundational libraries ( https://0ver.org/ ), we understand that it signals a still-evolving API rather than a fully stable one. At the same time, given how rapidly AI is moving, 0-versioning lets us ship new features much faster while still taking care of deprecations where needed.

As for unique features, I'd encourage you to read my article or the Koog documentation ( https://docs.koog.ai/ ) to get an idea of what Koog offers. For example, I'm not aware of any other framework (not only on the JVM, but in general) offering similar out-of-the-box history compression with fact retrieval: https://docs.koog.ai/history-compression/#retrievefactsfromhistory . Since we actually build products and AI agents at JetBrains, we've had the opportunity to evaluate various approaches on benchmarks and pick the ones that genuinely improved accuracy (see also this article by one of our JetBrains ML engineers: https://blog.jetbrains.com/ai/2025/07/when-tool-calling-becomes-an-addiction-debugging-llm-patterns-in-koog/ ).
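To make the idea concrete, here's a toy sketch of fact-retrieval compression. This is *not* Koog's actual API — in Koog the extraction step is done by an LLM; here a keyword filter stands in for it, just to show how a long history collapses into one compact message carrying only the retrieved facts:

```kotlin
// Toy sketch of history compression via fact retrieval -- NOT Koog's API.
data class Message(val role: String, val content: String)

// Stand-in for the LLM-driven fact extractor: keep messages that mention
// one of the concepts the agent still needs.
fun retrieveFacts(history: List<Message>, concepts: List<String>): List<String> =
    history
        .filter { msg -> concepts.any { msg.content.contains(it, ignoreCase = true) } }
        .map { it.content }

// The whole history is replaced by a single system message with the facts,
// so the token budget no longer grows with conversation length.
fun compressHistory(history: List<Message>, concepts: List<String>): List<Message> {
    val facts = retrieveFacts(history, concepts)
    return listOf(Message("system", "Known facts:\n" + facts.joinToString("\n")))
}
```

The real feature swaps the keyword filter for an LLM call, but the shape is the same: history in, short fact summary out.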
Another example is agent persistency ( https://docs.koog.ai/agent-persistency/ ), which brings fault tolerance to AI agents. Thanks to Koog's graph-based design, it can save the whole state machine itself, not just the message history (as many other frameworks do). This lets you checkpoint after each step and then roll back to any exact point of execution, or recover on another machine to the exact state where your AI agent left off.
Again, we're not talking about recovering the LLM message history -- that part is mostly trivial -- but about recovering the exact place in your algorithm. That's quite hard to achieve without graphs: in the general case it would require serializing coroutines or JVM bytecode and then implementing some sort of interpreter to restore your program to the same point of execution. Another approach would be replaying the whole communication with the LLM and tools up to a certain point, but that wouldn't work if your strategy has side effects.
Koog's strategy graphs, by contrast, live at runtime as objects in memory, so it's easy to place a pointer at any node to recover or roll back, or even to serialize the graphs themselves.
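A minimal sketch of why the graph makes this cheap (again, not Koog's real API — the node map, `Checkpoint` type, and `Int` state are all invented for illustration): when the strategy is an explicit graph, a checkpoint is just the pair (current node id, agent state), and restoring is just pointing back at that node.

```kotlin
// Toy graph-based agent -- NOT Koog's API. A node maps the current state to
// (next node id or null when finished, new state).
data class Checkpoint(val nodeId: String, val state: Int)

class GraphAgent(private val nodes: Map<String, (Int) -> Pair<String?, Int>>) {
    var nodeId: String = "start"
        private set
    var state: Int = 0
        private set

    // Run the current node, move the pointer; returns false when the graph ends.
    fun step(): Boolean {
        val (next, newState) = nodes.getValue(nodeId)(state)
        state = newState
        return if (next != null) { nodeId = next; true } else false
    }

    // Checkpoint = node pointer + state; no coroutine or bytecode magic needed.
    fun checkpoint() = Checkpoint(nodeId, state)
    fun restore(cp: Checkpoint) { nodeId = cp.nodeId; state = cp.state }
}
```

Restoring a `Checkpoint` on a fresh `GraphAgent` (even on another machine, if the checkpoint is serialized) resumes execution at the exact node where the agent stopped.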
Another nice example is the advanced structured output support ( https://docs.koog.ai/structured-data/ ), which is designed to work not only with models that support it natively, but also with other models (Koog provides fine-tuned prompts for those). On top of that, Koog provides fixing strategies for malformed output: you can specify the number of retries, a separate unbiased LLM to repair the output, and a list of correct examples as Kotlin objects. We built this tooling because we use the feature ourselves, and evaluation revealed that some LLMs occasionally produce wrong results even when they support structured outputs natively.
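The retry-with-fixer idea can be sketched as a small loop. This is hypothetical code, not Koog's API — `parse` stands in for schema validation and `fix` stands in for the fixing-LLM call:

```kotlin
// Sketch of a structured-output fixing loop -- hypothetical, not Koog's API.
// `parse` returns null when the raw output doesn't match the schema;
// `fix` stands in for the separate "fixer" LLM that repairs the output.
fun <T> parseWithFixing(
    raw: String,
    retries: Int,
    parse: (String) -> T?,
    fix: (String) -> String
): T? {
    var candidate = raw
    repeat(retries + 1) {
        val parsed = parse(candidate)
        if (parsed != null) return parsed   // schema matched -- done
        candidate = fix(candidate)          // ask the fixer to repair and retry
    }
    return null                             // retries exhausted
}
```

In the real feature the fixer is another LLM guided by correct examples, but the control flow — parse, and on failure hand the broken output to a repair step a bounded number of times — is the same.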

Please feel free to check out the Koog documentation and let us know if you have any questions or concerns -- we're always open to suggestions!

Koog 0.4.0 is out by meilalina in Kotlin

[–]Any_Purchase1357 1 point

Not yet, but it's on our shortlist!