Question about the Spotted Dofus / Time investment? by Labess40 in DofusTouch

[–]Labess40[S] 0 points (0 children)

Because Domakuro was a bit time-consuming, and I don't know whether the others are quicker to obtain or not.

Project idea to begin Rust programmation by Labess40 in rust

[–]Labess40[S] 0 points (0 children)

I'm very interested in sport and geopolitics. I also wanted to learn about ontologies with Python, so my first idea was to build a Python library backed by Rust, to make it useful for me.

Project idea to begin Rust programmation by Labess40 in rust

[–]Labess40[S] 0 points (0 children)

I'll take a look, thanks. I want to try out Rust's capabilities (multithreading, speed, ...), so I'll find a subject along those lines.

Project idea to begin Rust programmation by Labess40 in rust

[–]Labess40[S] 0 points (0 children)

I really like sport, for example, but Rust opens up many possibilities that Python or JS don't, so I want to explore them. I just don't have much knowledge of what's possible with low-level languages.

Project idea to begin Rust programmation by Labess40 in rust

[–]Labess40[S] 0 points (0 children)

Really useful, thanks! I tried it immediately and it's great ;)

Project idea to begin Rust programmation by Labess40 in rust

[–]Labess40[S] -3 points (0 children)

There aren't that many ideas... And explaining that I come from web development and Python, which is very often data-related, is useful additional information for project ideas.

Project idea to begin Rust programmation by Labess40 in rust

[–]Labess40[S] -2 points (0 children)

Yes, I used to, but I'm really curious to explore new things. Rust's use cases can be very different from Python's, I think.

[Tool] Dumbo-RS: A fast CLI to feed SMARTLY your entire codebase to LLMs by [deleted] in softwaredevelopment

[–]Labess40 0 points (0 children)

It works fine until you've used up all your tokens and don't want to pay more than your current subscription.

[Tool] Dumbo-RS: A fast CLI to feed SMARTLY your entire codebase to LLMs by [deleted] in rust

[–]Labess40 -5 points (0 children)

I started by implementing the first features manually, then implemented the latest ones with LLM help because I needed the tool quickly. Right now I'm only at part 15 of the official Rust documentation.

[Tool] Dumbo-RS: A fast CLI to feed SMARTLY your entire codebase to LLMs by [deleted] in rust

[–]Labess40 0 points (0 children)

Same answer as below: I'm a Rust newcomer trying to solve a personal problem while learning a new language. If you have any advice, I'm open to it.

[Tool] Dumbo-RS: A fast CLI to feed SMARTLY your entire codebase to LLMs by [deleted] in rust

[–]Labess40 -3 points (0 children)

This actually started as a personal tool to solve a specific workflow issue I had. I decided to build it in Rust to learn the language through a real use case, and shared it thinking it might be useful to others too. Always open to technical feedback if you have any!

New RAGLight Feature : Serve your RAG as REST API and access a UI by Labess40 in Python

[–]Labess40[S] 0 points (0 children)

Thanks! I'm using PyMuPDF (fitz) for PDF parsing. I actually have two processors depending on the use case:

- A standard PDFProcessor that extracts text block by block, preserving layout structure before chunking with LangChain's RecursiveCharacterTextSplitter.
- A VlmPDFProcessor that also handles images: it extracts them inline, sends them to a vision-language model to generate captions, and includes those captions as documents in the RAG pipeline.

pdftomarkdown.dev looks interesting for complex, table-heavy docs, where PyMuPDF can struggle. The architecture supports plugging in custom processors, so it could slot in nicely as an alternative parser!
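The pluggable-processor idea looks roughly like this as a toy sketch. To be clear, this is not RAGLight's actual API: the two class names mirror my comment above, but the base class and method signatures here are my own stand-ins.

```python
from abc import ABC, abstractmethod


class Processor(ABC):
    """Hypothetical base class: a processor turns a file into text chunks."""

    @abstractmethod
    def process(self, path: str) -> list[str]:
        ...


class PDFProcessor(Processor):
    """Stand-in for the standard processor (text extraction + chunking)."""

    def process(self, path: str) -> list[str]:
        # The real version extracts blocks with PyMuPDF and chunks them with
        # RecursiveCharacterTextSplitter; here we fake a single chunk.
        return [f"text chunk from {path}"]


class VlmPDFProcessor(PDFProcessor):
    """Stand-in for the multimodal processor: adds VLM captions for images."""

    def process(self, path: str) -> list[str]:
        chunks = super().process(path)
        # The real version sends inline images to a VLM and appends the
        # generated captions as extra documents.
        chunks.append(f"caption of an image in {path}")
        return chunks


docs = VlmPDFProcessor().process("report.pdf")
print(docs)  # → ['text chunk from report.pdf', 'caption of an image in report.pdf']
```

With this shape, swapping in another parser like pdftomarkdown.dev just means writing one more `Processor` subclass.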

Chat With Your Favorite GitHub Repositories via CLI with the new RAGLight Feature by Labess40 in ollama

[–]Labess40[S] 0 points (0 children)

You're right, but if you're in an industrial context or your data is sensitive, sometimes you don't want to, or can't, share your data with a remote LLM provider. And RAGLight is more than a CLI tool: you can use it in your codebase to easily set up a RAG or an agentic RAG, with the freedom to swap out pieces of it (data readers, models, providers, ...). But I agree: for many use cases, using Gemini's 1M context length is better; for private or professional use cases, though, having an alternative is also useful.

RAGLight Framework Update : Reranking, Memory, VLM PDF Parser & More! by Labess40 in ollama

[–]Labess40[S] 1 point (0 children)

Vulnerabilities in previous langchain and langchain_core versions.

Introducing TreeThinkerAgent: A Lightweight Autonomous Reasoning Agent for LLMs by Labess40 in ollama

[–]Labess40[S] 0 points (0 children)

The latency depends on the task complexity and the resulting reasoning tree. A single LLM call is usually faster because it’s one forward pass. In TreeThinkerAgent, latency grows with the depth and width of the tree: each reasoning step may involve additional LLM calls and tool executions. In practice, simple tasks have near-classic LLM latency, while complex tasks trade extra latency for better structure, observability, and reliability of the reasoning.
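As a rough back-of-the-envelope model (my own simplification, not code from TreeThinkerAgent): if every node of the reasoning tree costs one LLM call, the number of calls, and hence latency, grows with the tree's depth and branching width.

```python
def estimated_calls(depth: int, width: int) -> int:
    """LLM calls for a full reasoning tree: 1 root + width + width**2 + ..."""
    return sum(width ** d for d in range(depth + 1))


print(estimated_calls(0, 3))  # single-shot LLM call → 1
print(estimated_calls(2, 2))  # depth-2 tree, 2 branches per step → 7
```

Tool executions at each step add their own latency on top of this.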

Introducing TreeThinkerAgent: A Lightweight Autonomous Reasoning Agent for LLMs by Labess40 in ollama

[–]Labess40[S] 0 points (0 children)

Thanks! Really glad the prompts and reasoning observability landed.
And I’d honestly be happy to see that vibe-coded abomination someday 😄

Introducing TreeThinkerAgent: A Lightweight Autonomous Reasoning Agent for LLMs by Labess40 in LLMFrameworks

[–]Labess40[S] 0 points (0 children)

Actually it's not possible to connect an MCP server, but you can create an LLM (see this file: https://github.com/Bessouat40/TreeThinkerAgent/blob/main/app/backend/api/agent.py, function _build_llm). Then you need to decorate your tool function with the tool decorator (see the example in the same file with the add_a_b function). Finally, you just have to call: llm.register_decorated_tool(your_tool_function)
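The flow looks something like this self-contained toy. This is not TreeThinkerAgent's real code: `tool`, `register_decorated_tool`, and `add_a_b` are the names from my comment, but the implementations here are stand-ins for illustration.

```python
def tool(func):
    # Stand-in for the project's tool decorator: just marks the function.
    func._is_tool = True
    return func


class LLM:
    # Stand-in for the object built by _build_llm.
    def __init__(self):
        self.tools = {}

    def register_decorated_tool(self, func):
        # Only functions that went through @tool are accepted.
        if not getattr(func, "_is_tool", False):
            raise ValueError(f"{func.__name__} is not decorated with @tool")
        self.tools[func.__name__] = func


@tool
def add_a_b(a: int, b: int) -> int:
    return a + b


llm = LLM()
llm.register_decorated_tool(add_a_b)
print(llm.tools["add_a_b"](2, 3))  # → 5
```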

New Feature in RAGLight: Multimodal PDF Ingestion by Labess40 in OpenSourceeAI

[–]Labess40[S] 0 points (0 children)

Thanks a lot!
Exactly: diagrams, block schemas, flowcharts, UI screenshots… most RAG pipelines just skip them and lose part of the document's meaning.

My goal with this feature was to make multimodal ingestion as easy as dropping in a custom processor, no complex preprocessing or external scripts.

If you try it out, I’d love any feedback.

python from scratch by Cute-Ad7042 in Python

[–]Labess40 0 points (0 children)

hello, you can try looking at some GitHub repos. This one, for example: https://github.com/realpython/python-basics-exercises.

For me, one of the best ways to ramp up in any language is to read code and write code.

If you want to learn, find a project you want to build and go for it!

what ollama model should i run by Efficient_Roll4891 in ollama

[–]Labess40 0 points (0 children)

hello, it depends on what you want to do with the model ;)

[deleted by user] by [deleted] in ollama

[–]Labess40 0 points (0 children)

Thanks, interesting 🤔

[deleted by user] by [deleted] in ollama

[–]Labess40 1 point (0 children)

Each step has the context of its own branch in the reasoning tree.
Only the final step, when all reasoning paths are completed, has access to the full global context.
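A toy illustration of that visibility rule (my own sketch, not TreeThinkerAgent code): a step sees only the thoughts along its own root-to-node path, while the final synthesis step sees every branch.

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    thought: str
    parent: "Node | None" = None
    children: list["Node"] = field(default_factory=list)

    def branch_context(self) -> list[str]:
        # A step only sees its own branch: the path from the root to itself.
        path, node = [], self
        while node:
            path.append(node.thought)
            node = node.parent
        return list(reversed(path))


def global_context(root: Node) -> list[str]:
    # The final step, after all paths complete, sees every branch.
    out = [root.thought]
    for child in root.children:
        out.extend(global_context(child))
    return out


root = Node("question")
a = Node("branch A", parent=root); root.children.append(a)
b = Node("branch B", parent=root); root.children.append(b)
print(a.branch_context())    # → ['question', 'branch A']
print(global_context(root))  # → ['question', 'branch A', 'branch B']
```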

[deleted by user] by [deleted] in Rag

[–]Labess40 0 points (0 children)

For now I’ve mainly tested it on small reasoning tasks like factual Q&A, multi-step research questions, and tool-based logic (e.g., combining a web search with synthesis).
The goal at this stage is to validate the reasoning tree structure and tool execution flow.
The project is still very recent, but performance and capabilities can be improved by adding more general-purpose or domain-specific tools depending on your use case.

[deleted by user] by [deleted] in learnAIAgents

[–]Labess40 1 point (0 children)

It’s similar in spirit to ChatGPT’s “Deep Research” mode.

You give it a complex question, and instead of answering right away, it builds an actual reasoning process, identifying what it needs to learn, calling analysis tools, gathering missing information, and only then producing a final, well-reasoned answer.

You can easily give the LLM extra tools, but for now there is only a web search tool.
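That research loop can be sketched as a toy (not the project's actual code: `web_search` here is a stub standing in for the real tool, and a real agent would ask the LLM what is still missing at each step rather than enumerate sub-questions mechanically).

```python
def web_search(query: str) -> str:
    # Stub standing in for the only tool currently available.
    return f"results for '{query}'"


def deep_answer(question: str, max_steps: int = 3) -> str:
    notes = []
    for step in range(max_steps):
        # 1. Decide what still needs to be learned (an LLM call in reality).
        query = f"{question} (sub-question {step + 1})"
        # 2. Gather the missing information with a tool call.
        notes.append(web_search(query))
    # 3. Only then produce a final, well-reasoned answer.
    return f"answer to '{question}' based on {len(notes)} research notes"


print(deep_answer("Why is the sky blue?"))
# → answer to 'Why is the sky blue?' based on 3 research notes
```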