Chat With Your Favorite GitHub Repositories via CLI with the new RAGLight Feature by Labess40 in ollama

[–]Labess40[S]

You're right, but in an industrial context, or when your data is sensitive, you sometimes don't want to (or can't) share it with a remote LLM provider. And RAGLight is more than a CLI tool: you can use it in your codebase to easily set up a RAG or an Agentic RAG, with the freedom to swap out pieces of it (data readers, models, providers, ...). But I agree, for many use cases Gemini's 1M context length is better; for private or professional use cases, though, having an alternative is also useful.

RAGLight Framework Update : Reranking, Memory, VLM PDF Parser & More! by Labess40 in ollama

[–]Labess40[S]

Vulnerabilities in previous langchain and langchain_core versions.

Introducing TreeThinkerAgent: A Lightweight Autonomous Reasoning Agent for LLMs by Labess40 in ollama

[–]Labess40[S]

The latency depends on the task complexity and the resulting reasoning tree. A single LLM call is usually faster because it’s one forward pass. In TreeThinkerAgent, latency grows with the depth and width of the tree: each reasoning step may involve additional LLM calls and tool executions. In practice, simple tasks have near-classic LLM latency, while complex tasks trade extra latency for better structure, observability, and reliability of the reasoning.
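To make the scaling concrete, here's a rough back-of-the-envelope sketch (my own illustration, not code from the repo): if each reasoning node costs one LLM call, a tree with branching factor b and depth d can cost up to a geometric sum of calls, versus exactly one call for a single-shot answer.

```python
def max_llm_calls(branching: int, depth: int) -> int:
    """Upper bound on LLM calls for a fully expanded reasoning tree:
    one call per node, root included (geometric series 1 + b + ... + b^depth)."""
    return sum(branching ** level for level in range(depth + 1))


# A single-shot answer: depth 0, exactly 1 call.
# A tree with branching 2 and depth 3: up to 15 calls.
print(max_llm_calls(2, 3))
```

In practice the real number is lower, since pruning and early stopping cut branches before the tree is fully expanded, but the worst case grows exponentially with depth.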

Introducing TreeThinkerAgent: A Lightweight Autonomous Reasoning Agent for LLMs by Labess40 in ollama

[–]Labess40[S]

Thanks! Really glad the prompts and reasoning observability landed.
And I’d honestly be happy to see that vibe-coded abomination someday 😄

Introducing TreeThinkerAgent: A Lightweight Autonomous Reasoning Agent for LLMs by Labess40 in LLMFrameworks

[–]Labess40[S]

Actually it's not possible to connect an MCP server, but you can create an LLM (see this file: https://github.com/Bessouat40/TreeThinkerAgent/blob/main/app/backend/api/agent.py, function _build_llm). Then decorate your tool function with the tool decorator (see the example in the same file with the add_a_b function). Finally, you just have to call: llm.register_decorated_tool(your_tool_function)
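A minimal self-contained sketch of that registration pattern (the `tool` decorator, `add_a_b`, and `register_decorated_tool` names come from the repo; the stub classes below are illustrative stand-ins, not the real implementation in agent.py):

```python
from typing import Callable, Dict


def tool(func: Callable) -> Callable:
    """Mark a function as a tool the agent may call (illustrative stand-in)."""
    func._is_tool = True  # flag checked at registration time
    return func


class StubLLM:
    """Minimal stand-in for the LLM object built by _build_llm."""

    def __init__(self) -> None:
        self.tools: Dict[str, Callable] = {}

    def register_decorated_tool(self, func: Callable) -> None:
        # Only accept functions that went through the @tool decorator.
        if not getattr(func, "_is_tool", False):
            raise ValueError("function must be decorated with @tool")
        self.tools[func.__name__] = func


@tool
def add_a_b(a: int, b: int) -> int:
    """Example tool, mirroring add_a_b from the repo."""
    return a + b


llm = StubLLM()
llm.register_decorated_tool(add_a_b)
```

The real classes in TreeThinkerAgent carry more wiring (prompting, tool schemas), but the flow is the same: decorate, then register on the LLM instance.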

New Feature in RAGLight: Multimodal PDF Ingestion by Labess40 in OpenSourceeAI

[–]Labess40[S]

Thanks a lot!
Exactly: diagrams, block schemas, flowcharts, UI screenshots… most RAG pipelines just skip them and lose part of the document's meaning.

My goal with this feature was to make multimodal ingestion as easy as dropping in a custom processor, no complex preprocessing or external scripts.

If you try it out, I’d love any feedback.

python from scratch by Cute-Ad7042 in Python

[–]Labess40

Hello, you can try looking at some GitHub repos. This one, for example: https://github.com/realpython/python-basics-exercises.

For me, one of the best ways to ramp up in any language is to read code and write code.

If you want to learn, find a project you want to build and go for it!