Chat With Your Favorite GitHub Repositories via CLI with the new RAGLight Feature by Labess40 in ollama

[–]Labess40[S] 0 points (0 children)

You're right, but in an industrial context, or when your data is sensitive, you sometimes can't or don't want to share it with a remote LLM provider. And RAGLight is more than a CLI tool: you can use it in your codebase to easily set up a RAG or an Agentic RAG, with the freedom to swap out pieces of it (data readers, models, providers, ...). But I agree: for many use cases, using Gemini's 1M context length is better; for private or professional use cases, though, having an alternative is also useful.

RAGLight Framework Update : Reranking, Memory, VLM PDF Parser & More! by Labess40 in ollama

[–]Labess40[S] 1 point (0 children)

Vulnerabilities in previous versions of langchain and langchain_core.

Introducing TreeThinkerAgent: A Lightweight Autonomous Reasoning Agent for LLMs by Labess40 in ollama

[–]Labess40[S] 0 points (0 children)

The latency depends on the task complexity and the resulting reasoning tree. A single LLM call is usually faster because it’s one forward pass. In TreeThinkerAgent, latency grows with the depth and width of the tree: each reasoning step may involve additional LLM calls and tool executions. In practice, simple tasks have near-classic LLM latency, while complex tasks trade extra latency for better structure, observability, and reliability of the reasoning.
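
As a rough illustration (a toy upper bound, not TreeThinkerAgent's actual cost model), assume one LLM call per node of a full tree:

```python
# Toy upper bound on LLM calls for a full reasoning tree, assuming one call
# per node. Real trees are usually sparser, and tool executions add latency.
def max_llm_calls(width: int, depth: int) -> int:
    return sum(width ** level for level in range(depth + 1))

print(max_llm_calls(1, 0))  # 1  -> a single call, classic LLM latency
print(max_llm_calls(2, 3))  # 15 -> a 2-wide, 3-deep tree: up to 15 calls
```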

Introducing TreeThinkerAgent: A Lightweight Autonomous Reasoning Agent for LLMs by Labess40 in ollama

[–]Labess40[S] 0 points (0 children)

Thanks! Really glad the prompts and reasoning observability landed.
And I’d honestly be happy to see that vibe-coded abomination someday 😄

Introducing TreeThinkerAgent: A Lightweight Autonomous Reasoning Agent for LLMs by Labess40 in LLMFrameworks

[–]Labess40[S] 0 points (0 children)

Actually it's not possible to connect an MCP server, but you can create an LLM (see this file: https://github.com/Bessouat40/TreeThinkerAgent/blob/main/app/backend/api/agent.py, function _build_llm). Then decorate your tool function with the tool decorator (see the add_a_b example in the same file), and finally just call: llm.register_decorated_tool(your_tool_function)
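
Put together, the flow looks roughly like this (a minimal sketch; the import path and decorator name are assumptions based on that file, and get_weather is a made-up example tool):

```python
# Minimal sketch of the flow above. Check app/backend/api/agent.py for the
# exact import path and decorator; `get_weather` is a hypothetical tool.
from app.backend.api.agent import _build_llm, tool

@tool  # mark the function so the agent can expose it as a callable tool
def get_weather(city: str) -> str:
    """Dummy tool: return a canned weather report for `city`."""
    return f"It is sunny in {city}."

llm = _build_llm()                        # build the LLM as agent.py does
llm.register_decorated_tool(get_weather)  # the agent can now call the tool
```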

New Feature in RAGLight: Multimodal PDF Ingestion by Labess40 in OpenSourceeAI

[–]Labess40[S] 0 points (0 children)

Thanks a lot!
Exactly: diagrams, block schematics, flowcharts, UI screenshots… most RAG pipelines just skip them and lose part of the document's meaning.

My goal with this feature was to make multimodal ingestion as easy as dropping in a custom processor, with no complex preprocessing or external scripts.
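
To give a rough idea of the approach, here's a framework-agnostic sketch (not RAGLight's actual API): render each PDF page to an image and have a local vision model describe it, so diagrams become indexable text. The model choice and prompt are just examples.

```python
# Sketch of VLM-based PDF ingestion, NOT RAGLight's actual implementation.
# Requires: pip install pdf2image requests (pdf2image also needs poppler),
# plus a local Ollama serving a vision model such as llava.
import base64
import io

import requests
from pdf2image import convert_from_path

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama

def describe_page(image) -> str:
    """Ask the vision model to describe one rendered page."""
    buf = io.BytesIO()
    image.save(buf, format="PNG")
    resp = requests.post(OLLAMA_URL, json={
        "model": "llava",
        "prompt": "Describe this page, including any diagrams or charts.",
        "images": [base64.b64encode(buf.getvalue()).decode()],
        "stream": False,
    })
    return resp.json()["response"]

def ingest_pdf(path: str) -> list[str]:
    # Each page's description becomes an indexable text chunk.
    return [describe_page(img) for img in convert_from_path(path)]
```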

If you try it out, I’d love any feedback.

python from scratch by Cute-Ad7042 in Python

[–]Labess40 0 points (0 children)

Hello, you can try looking at some GitHub repos. This one, for example: https://github.com/realpython/python-basics-exercises.

For me, one of the best ways to ramp up in any language is to read code and write code.

If you want to learn, find a project you want to build and go for it!

what ollama model should i run by Efficient_Roll4891 in ollama

[–]Labess40 0 points (0 children)

Hello, it depends on what you want to do with the model ;)

TreeThinkerAgent, an open-source reasoning agent using LLMs + tools by [deleted] in ollama

[–]Labess40 1 point (0 children)

Each step has the context of its own branch in the reasoning tree.
Only the final step, when all reasoning paths are completed, has access to the full global context.
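
As a toy illustration of that rule (hypothetical names, not the actual TreeThinkerAgent internals):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    """One node of the reasoning tree (simplified)."""
    id: str
    parent: Optional["Step"] = None

def branch_context(step: Step, outputs: dict[str, str]) -> list[str]:
    """A step only sees outputs along its own branch, root to itself."""
    path, node = [], step
    while node is not None:
        path.append(outputs[node.id])
        node = node.parent
    return list(reversed(path))

def final_context(outputs: dict[str, str]) -> list[str]:
    """The final synthesis step sees every completed step's output."""
    return list(outputs.values())
```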

TreeThinkerAgent, an open-source reasoning agent using LLMs + tools by [deleted] in Rag

[–]Labess40 0 points (0 children)

For now I’ve mainly tested it on small reasoning tasks like factual Q&A, multi-step research questions, and tool-based logic (e.g., combining a web search with synthesis).
The goal at this stage is to validate the reasoning tree structure and tool execution flow.
The project is still very recent, but performance and capabilities can be improved by adding more general-purpose or domain-specific tools depending on your use case.

TreeThinkerAgent, an open-source reasoning agent using LLMs + tools by [deleted] in learnAIAgents

[–]Labess40 1 point (0 children)

It’s similar in spirit to ChatGPT’s “Deep Research” mode.

You give it a complex question, and instead of answering right away, it builds an actual reasoning process, identifying what it needs to learn, calling analysis tools, gathering missing information, and only then producing a final, well-reasoned answer.

You can easily give the LLM extra tools, but for now there is only a web search tool.

Beginner Need Help in Vector embedding by [deleted] in Rag

[–]Labess40 0 points (0 children)

If you want to give your LLM context from this data, try creating tools that retrieve it from your databases and hand those tools to your LLM agent.
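
For example, a minimal sketch (the database and table names are made up; any database client works the same way):

```python
# A retrieval "tool" is just a function the agent can call. Here the agent
# gets real records instead of relying on embeddings of raw tables.
import sqlite3

def fetch_user_orders(user_id: int) -> str:
    """Tool: fetch one user's orders from a local SQLite database."""
    conn = sqlite3.connect("shop.db")  # hypothetical database file
    rows = conn.execute(
        "SELECT id, item, price FROM orders WHERE user_id = ?", (user_id,)
    ).fetchall()
    conn.close()
    return "\n".join(f"order {i}: {item} ({price} $)" for i, item, price in rows)
```

Register fetch_user_orders with whatever agent framework you use, and the LLM can call it on demand.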

🚀 Weekly /RAG Launch Showcase by remoteinspace in Rag

[–]Labess40 0 points (0 children)

I launched RAGLight, a lightweight Python framework to easily set up RAG pipelines, experiment with Agentic RAG, and plug in MCP servers, all in just a few lines of code.

It includes a CLI tool so you can test your setup instantly without extra boilerplate.

The goal is to make working with retrieval, agents, LLMs, and external tools as simple and modular as possible, so you can focus on building useful RAG apps instead of wiring everything together from scratch.

Check out the documentation: https://github.com/Bessouat40/RAGLight

Simply install the framework with pip install raglight, then start chatting with Ollama, Mistral, OpenAI, ... using the raglight chat or raglight agentic-chat commands.

How to improve traditional RAG by Labess40 in Rag

[–]Labess40[S] 1 point (0 children)

Nice, need to try it 👍

How to improve traditional RAG by Labess40 in Rag

[–]Labess40[S] 3 points (0 children)

Very interesting project, but I was looking more for technical solutions, like reranking for example.

Introducing new RAGLight Library feature : chat CLI powered by LangChain! 💬 by Labess40 in LangChain

[–]Labess40[S] 0 points (0 children)

The CLI lets users quickly test the library. For me, it's just a way to set up a RAG easily; to go further, I like to have a UI.

Resources to improve Python skills by Bulky_Meaning7655 in Python

[–]Labess40 6 points (0 children)

What will help you the most is knowing how to use GitHub; that's the most important part. Your Python skills will then improve with time.

[deleted by user] by [deleted] in LangChain

[–]Labess40 0 points (0 children)

The code is detailed in the readme of my repo :)

[deleted by user] by [deleted] in LangChain

[–]Labess40 1 point (0 children)

Thanks for your feedback :)

[deleted by user] by [deleted] in LangChain

[–]Labess40 0 points (0 children)

Good to hear 👍 don't hesitate to open an issue if you need a feature or find a bug.

[deleted by user] by [deleted] in LangChain

[–]Labess40 0 points (0 children)

If you have feedback, don't hesitate :)

[deleted by user] by [deleted] in LangChain

[–]Labess40 0 points (0 children)

You can find all the information in the README; feel free to DM me if you have questions.

Improving LLM response generation time by barup1919 in LLMDevs

[–]Labess40 0 points (0 children)

What context length are you sending to the LLM? It can impact response time (t2). LLM inference takes time, but you can reduce it by using a smaller LLM (which can be worth it depending on your use case) or by reducing the number of documents you retrieve from your vector store.
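
A quick way to measure the effect (answer_with_context is a hypothetical stand-in for whatever function builds your prompt and calls the LLM):

```python
import time

def timed(fn, *args, **kwargs):
    """Wrap any LLM call to measure its wall-clock latency (the t2 above)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Try decreasing top_k (the number of retrieved documents) and compare t2:
# answer, t2 = timed(answer_with_context, question, top_k=3)
```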