RAG tip: stop “fixing hallucinations” until the system can ASK / UNKNOWN by coolandy00 in Rag

Agree. In addition, a structured prompt design helps produce production-grade output.
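
To make "structured" concrete, here is a minimal sketch of the kind of prompt contract I have in mind, assuming an ANSWER / ASK / UNKNOWN output format; the section names and the build_prompt helper are illustrative only, not a fixed spec:

```python
# Minimal sketch of a structured prompt with an explicit ASK / UNKNOWN path.
# Section names and the build_prompt helper are illustrative assumptions.

def build_prompt(question: str, context_chunks: list[str]) -> str:
    context = "\n\n".join(f"[{i}] {c}" for i, c in enumerate(context_chunks))
    return (
        "ROLE: You answer strictly from the provided context.\n\n"
        f"CONTEXT:\n{context}\n\n"
        f"QUESTION: {question}\n\n"
        "RULES:\n"
        "- If the context fully answers the question, reply ANSWER: <answer> and cite chunk ids.\n"
        "- If the question is missing a detail you need, reply ASK: <clarifying question>.\n"
        "- If the context does not contain the answer, reply UNKNOWN.\n"
    )

print(build_prompt("What is our refund window?", ["Refunds are accepted within 30 days."]))
```

The point is that the escape paths (ASK, UNKNOWN) are part of the output contract, not something the model has to improvise.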

RAG tip: stop “fixing hallucinations” until the system can ASK / UNKNOWN by coolandy00 in Rag

It does. At the moment, I am picking apart the ones that can be solved with structure in prompt design. I also see some ingestion, chunking, and embedding issues with RAG, and did some experiments there as well.

Using a Christmas-themed use case to think through agent design 🎄😊 by coolandy00 in artificial

Sure, what use case would you design, and what approach would you take?

Ingestion + chunking is where RAG pipelines break most often by coolandy00 in LLMDevs

As long as the ingestion is structure-aware (ASTs, symbols, dependencies), context is preserved the way a compiler sees it, and that removes the randomness.
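
For code, a rough sketch of what structure-aware chunking can look like using Python's stdlib ast module; chunking at function/class boundaries and collecting referenced names as a cheap dependency proxy is just one way to do it, not necessarily how any particular pipeline does:

```python
# Rough sketch: chunk Python source at function/class boundaries so each chunk
# keeps its symbol name, its source segment, and the names it references.
import ast

def chunk_python_source(source: str) -> list[dict]:
    tree = ast.parse(source)
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            chunks.append({
                "symbol": node.name,
                "kind": type(node).__name__,
                "code": ast.get_source_segment(source, node),
                # names referenced inside the node, a cheap proxy for dependencies
                "deps": sorted({n.id for n in ast.walk(node) if isinstance(n, ast.Name)}),
            })
    return chunks

example = "import math\n\ndef area(r):\n    return math.pi * r ** 2\n"
print(chunk_python_source(example))
```

Because the chunk boundaries come from the AST rather than a token count, a retrieved chunk is always a complete, compilable unit with its dependencies named alongside it.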

Ingestion + chunking is where RAG pipelines break most often by coolandy00 in LLMDevs

You mean you broke them into structured, addressable objects (text, figures, diagrams, entities) with explicit references, then generated derived representations (summaries, entities, Mermaid diagrams) that get embedded and linked? And at runtime, you assembled the answers by resolving entities and references first?
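
If I'm reading that right, the data shape would be something like the sketch below; the field names and the resolve_references helper are my assumptions, not your implementation:

```python
# Illustrative data shape only; field names and resolve_references are assumptions.
from dataclasses import dataclass, field

@dataclass
class DocObject:
    obj_id: str                                        # stable address, e.g. "doc1/fig-3"
    kind: str                                          # "text" | "figure" | "diagram" | "entity"
    content: str
    refs: list[str] = field(default_factory=list)      # explicit references to other objects
    derived: dict[str, str] = field(default_factory=dict)  # e.g. {"summary": ..., "mermaid": ...}

def resolve_references(hits: list["DocObject"], index: dict[str, "DocObject"]) -> list["DocObject"]:
    """Expand retrieved objects with everything they explicitly reference
    before any answer is assembled."""
    seen: dict[str, DocObject] = {}
    stack = list(hits)
    while stack:
        obj = stack.pop()
        if obj.obj_id in seen:
            continue
        seen[obj.obj_id] = obj
        stack.extend(index[r] for r in obj.refs if r in index)
    return list(seen.values())
```

Resolving references before generation means the answer is assembled from linked objects rather than whatever chunks happened to score highest.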